I need to take an image and place it onto a new, generated white background so that it can be converted into a downloadable desktop wallpaper. So the process would go: ImageDraw

This can be accomplished with an Image instance's paste method:

from PIL import Image

img = Image.open('/pathto/file', 'r')
img_w, img_h = img.size
background = Image.new('RGBA', (1440, 900), (255, 255, 255, 255))
bg_w, bg_h = background.size
# paste() expects integer coordinates, so use floor division to centre the image
offset = ((bg_w - img_w) // 2, (bg_h - img_h) // 2)
background.paste(img, offset)
background.save('out.png')

This and many other PIL tricks can be picked up at Nadia Alramli's PIL Tutorial.
Throwing the right Exception/Error (3 messages) - Posted by: Sanjay Noronha - Posted on: April 20 2001 07:37 EDT

Hi, we are planning to catch Errors in our beans and throw an EJBException to the client. The EJBException will take an error code (a String object) as a parameter. At the client we will catch RemoteException, extract the error code and display a more user-friendly message to the user. Thus our only objective is a more user-friendly message in case Errors are escalated to the web tier. My doubt is: by catching an Error and throwing an EJBException, are we losing out on something? Thanks, Sanjay

Threaded Messages (3)
- Throwing the right Exception/Error by Tony Brookes on April 20 2001 18:15 EDT
- Throwing the right Exception/Error by Sanjay Noronha on April 22 2001 03:04 EDT
- Throwing the right Exception/Error by Tony Brookes on April 23 2001 10:54 EDT

Throwing the right Exception/Error [ Go to top ]

When you say Error, do you mean subclasses of java.lang.Error? You shouldn't use these classes as part of your own exception tree; they are really only for things that are likely to kill the JVM. - Posted by: Tony Brookes - Posted on: April 20 2001 18:15 EDT - in response to Sanjay Noronha

One option you could use is a composite pattern:

public class ChainedException extends Exception {
    Exception e;
    int errorCode;

    public ChainedException(Exception e, int errorCode) {
        this.e = e;
        this.errorCode = errorCode;
    }

    public void printStackTrace(PrintStream ps) {
        if (e != null) {
            e.printStackTrace(ps);
        } else {
            super.printStackTrace(ps);
        }
    }

    public String toString() {
        return "Chained Exception, with error code " + errorCode + ": "
            + (e != null ? e.toString() : super.toString());
    }
}

That lets you encapsulate an exception inside another one, thus reducing the exception signature of your EJB methods. But since it overrides all the printStackTrace(...) methods (only one shown above), you will always get the real stack trace when you dump it to the log.
So, your bean could...

try {
    // Do things with a database.
} catch (SQLException sqle) {
    throw new ChainedException(sqle, sqle.getErrorCode());
}

... which would work. You could also handle the error code logic yourself, in which case write a helper class to encapsulate the decision about the error messages and error codes. Hope that helps. Chz Tony

Throwing the right Exception/Error [ Go to top ]

Hi Tony, - Posted by: Sanjay Noronha - Posted on: April 22 2001 03:04 EDT - in response to Tony Brookes

Thanks for the information. I guess I have not been too lucid, so here goes. We have a framework in place for handling application-level exceptions that is akin to the example you have given. We have this little concern, which is in points below:

1. Ours is a product, and our objective is to provide the end user with messages that are as user-friendly as possible.
2. In case the application server throws an error (as in java.lang.Error), the message that the end user gets may not make much sense to him/her.
3. So our solution is: in our bean methods we will have a try-catch block as below:

X() throws abcException {
    try {
        // business logic
    } catch (Error e) {
        // Log the error
        throw new EJBException("ERRCODE");
    }
}

Now at the client, say a JSP, we will catch RemoteException (i.e. the app server will throw RemoteException), extract the ERRCODE and, by getting the corresponding error message from a property file (say), display a message (say) as below:

There is some problem in processing your form. Please try later.

Thus what we are getting is clarity for the end user, and what we are losing out on is good design. So is this right?

Throwing the right Exception/Error [ Go to top ]

I wouldn't do that, personally. - Posted by: Tony Brookes - Posted on: April 23 2001 10:54 EDT - in response to Sanjay Noronha

I wouldn't catch it in the beans. Have a standard error page for all JSPs. When unexpected errors happen, you can display the helpful information in there.
I can see why you want error codes, particularly for help desks and the like. On the other hand, you don't want to give too much information to end users; it can give hackers too much ammunition when they are trying to blow your site up. Most of the JSP engines will log the exception for you anyway. Chz Tony
- Creates video presentations with digital photos, music and recorded stories.
- A program for developing graphics for applications and web-page graphic elements.
- Avidemux is a free video editor designed for simple cutting and filtering.
- AAA Logo is a powerful logo maker / logo creation software.
- Blaze Media Pro is a very powerful multimedia player, converter and editor.
- Imports about 400 graphic file formats; exports about 50 graphic file formats.
- InstantBurn is a software solution for rewritable DVD disks.
- EaseUS Data Recovery Wizard Professional does an amazing job on format recovery.
- BenVista PhotoZoom Pro helps you resize your favorite pictures.
- Flexible multimedia editing and conversion tool with batch-processing support.
- You can convert popular formats like AVI, MPEG, WMV, MP4, DivX, RM, MOV and FLV.
- Xara 3D Maker 7 transforms any texts or shapes into high-quality 3D graphics.
- PhotoInstrument is an easy-to-learn tool for editing and retouching photos.
- Design your good-looking business cards easily with Business Card Designer Pro.
- With Cartoonist 1.3 we can change our photos and create incredible cartoons.
Introduction

While the flexibility of choices in data organization, storage, compression and formats in Hadoop makes it easy to process data, understanding the impact of these choices on search, performance and usability allows for better design patterns. HDFS is very commonly used for data storage, but there are other commonly used systems, like HBase or other NoSQL stores, which need to be considered when storing data. There are a number of storage and compression formats, suitable for different use cases. Raw data may use one storage format and processed data another; this depends upon the access pattern. In the sections below we cover various aspects of data storage, such as file formats, compression strategy and schema design. In the last section we explain each concept through the use case of investment risk analytics.

Major considerations for Hadoop data storage

File Format

There are multiple storage formats suitable for storing data in HDFS, such as plain text files, rich file formats like Avro and Parquet, and Hadoop-specific formats like Sequence files. These formats have their own pros and cons depending upon the use case.

Compression

Big data solutions should be able to process large amounts of data quickly. Compressing data speeds up I/O operations and saves storage space as well. But it can increase processing time and CPU utilization because of decompression. So a balance is required: the more the compression, the smaller the data, but the higher the processing and CPU cost. Compressed files should also be splittable to support parallel processing. If a file is not splittable, we cannot feed it to multiple tasks running in parallel, and hence we lose the biggest advantage of parallel processing frameworks like Hadoop and Spark.
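The splittability point can be illustrated without Hadoop at all. Below is a minimal pure-Python sketch (record layout and block size are made up for illustration, and gzip stands in for a Hadoop codec): each "block" is an independent gzip stream, so any block can be decompressed on its own, which is exactly the property that lets parallel tasks each take one piece of the file.

```python
import gzip

# Hypothetical records -- 1000 small CSV-style rows.
records = [f"trade-{i},riskType-A,100.0".encode() for i in range(1000)]

# One monolithic gzip stream: it must be read from the start, so a file
# compressed this way is NOT splittable.
whole = gzip.compress(b"\n".join(records))

# "Block compressed": every 250 records form an independent gzip member,
# analogous to a block-compressed file with sync markers between blocks.
blocks = [gzip.compress(b"\n".join(records[i:i + 250]))
          for i in range(0, len(records), 250)]

# Any single block can be decompressed on its own, so separate tasks could
# each process one block in parallel.
third_block = gzip.decompress(blocks[2]).split(b"\n")
print(len(third_block))  # 250 -- block 2 recovered without touching the rest
```

The monolithic stream still decompresses to the same data; it just cannot be entered in the middle, which is why the compression format, not only the ratio, matters for parallel frameworks.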
Data Storage based on Access Pattern

Though in any Hadoop system data resides on HDFS, decision points need to be considered on when to store the data in HDFS, in a NoSQL DB like HBase, or in both. This decision depends upon whether random access to the data is required, and also whether frequent updates are required. Data in HDFS is immutable, so for frequent updates we need storage like HBase, which supports updates. HDFS is good for scan-type queries, but if random access to data is required then we should consider HBase.

Metadata Management

When data grows enormously, metadata management becomes an important part of the system. Data in HDFS is stored in a self-describing directory structure, which is also part of metadata management. Typically, when your data arrives it is tagged with source and arrival time, and based upon these attributes it is organized in HDFS in a self-describing directory structure. Once your data is in the Hadoop ecosystem (HDFS or HBase), you should be able to query and analyze that data. To do that you should know what kind of data is stored and what attributes each data set holds. With proper metadata management, you should be able to perform these tasks with ease. There are tools like HCatalog (the Hive metastore) which are specifically built for this purpose. WebHCat is the REST API for HCatalog.

Data Organization

Below are the principles/considerations for organizing data in the Hadoop storage layer –

- The folder structure should be self-describing, stating what data it holds.
- The folder structure should also be in line with the various stages of processing, if there are multiple processing stages. This is required to re-run a batch from any stage if an issue occurs during processing.
- The partitioning strategy also dictates what the directory structure should be.
Typical self-describing folder structures have different data zones. The folder structure would look like below –

\data\<zone name>\<source>\<arrival timestamp>\<directory name depending upon the kind of data it stores>

e.g. in a trading risk application the directory structure would look like below –

\data\<zone name>\<source>\<cob>\<data arrival timestamp>\<directory name depending upon the kind of data it stores>

There are other ways as well to create self-describing folder structures; we should try to follow both approaches.

Common File Formats

As we have already discussed in the section above, within the Hadoop storage layer data is divided into various stages. For the sake of simplicity, let's classify the stored data into two simple categories –

- Raw Data
- Processed Data

For raw data, access patterns differ from processed data, and hence the file formats differ too. When processing raw data we usually use all the fields, so the underlying storage system should support that use case efficiently. But analytical queries access only a few columns of the processed data, so the underlying storage system should handle that case in the most efficient way, in terms of disk I/O etc.

Raw Data Formats

Standard File Formats

Plain Text Files

A very common use case in the Hadoop ecosystem is to store log files or other plain text files holding unstructured data, for storage and analytics purposes. These text files can easily eat up the whole disk space, so a proper compression mechanism is required depending upon the use case. For example, some organizations use HDFS just to store archived data; in this case the most compact compression is required, as there would hardly be any processing on such data. On the other hand, if the stored data will be used for processing, a splittable file format with a decent compression level is required.
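As a small illustration of a self-describing layout, the sketch below (plain Python; the zone, source and directory names are hypothetical) builds and parses such a path, showing that the metadata — zone, source, cob, arrival time — is recoverable from the directory structure alone.

```python
import posixpath

def build_path(zone, source, cob, arrival, kind):
    """Build a self-describing HDFS-style path (illustrative layout)."""
    return posixpath.join("/data", zone, source, cob, arrival, kind)

def parse_path(path):
    """Recover the metadata encoded in the directory structure."""
    _, _, zone, source, cob, arrival, kind = path.split("/")
    return {"zone": zone, "source": source, "cob": cob,
            "arrival": arrival, "kind": kind}

# Hypothetical zone/source names for a trading risk application.
p = build_path("raw", "riskEngine", "20160501", "20160501T0630", "tradeRisk")
print(p)  # /data/raw/riskEngine/20160501/20160501T0630/tradeRisk
meta = parse_path(p)
```

Because the structure is fixed and self-describing, a tool that only sees the path can still answer "what is this data and when did it arrive?".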
We should also consider the fact that when data is stored as text files, there is additional overhead of type conversions. For example, storing 10000 as a string takes up more space and also requires type conversion from String to Int at read time. This overhead grows considerably when we start processing TBs of data.

Structured Text Data

There are more sophisticated forms of text files holding data in some standardized form, such as CSV, TSV, XML or JSON files. Hadoop has no built-in InputFormat to handle XML or JSON files. Apache Mahout's XmlInputFormat can be used to process XML files. Currently there is no means to process XML files with Hive; custom SerDes are required for this purpose. There is no start or end tag in JSON, which makes JSON files challenging to work with, as it is difficult to split them. Elephant Bird is one library which provides LzoJsonInputFormat to work with JSON files, but this means the files must be LZO compressed. The other way to deal with XML and JSON files is to convert them into formats like Avro or Sequence files.

Binary Files

For most cases, storing binary files such as images and videos in a container format such as a Sequence file is the preferred way, but very large binary files should be stored as-is.

Big Data Specific File Formats

There are many big-data-specific file formats, such as file-based data structures like Sequence files, serialization formats like Avro, and columnar formats like Parquet or ORC.

Sequence Files

These files contain data as binary key-value pairs. There are three formats –

- Uncompressed – no compression.
- Record compressed – records are compressed as they are added to the file.
- Block compressed – this format waits until the data reaches block size, then compresses a whole block of records at once. Block compression provides better compression than record compression. A block here refers to a group of records compressed together within an HDFS block; there can be multiple Sequence file blocks within one HDFS block.
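The record-versus-block trade-off can be demonstrated with the stdlib zlib module (illustrative only — real Sequence files use Hadoop codecs, and the record layout here is made up): compressing many similar records together lets the compressor exploit redundancy across records, which per-record compression cannot.

```python
import zlib

# Hypothetical risk rows: similar-looking records compress far better
# together than one at a time.
records = [f"20160501,book42,trade-{i},VECTOR,10Y,1234.56".encode()
           for i in range(500)]

# Record-style: compress each record individually. Per-record codec overhead,
# and no cross-record redundancy to exploit.
record_compressed = sum(len(zlib.compress(r)) for r in records)

# Block-style: compress all records as one block.
block_compressed = len(zlib.compress(b"\n".join(records)))

print(record_compressed > block_compressed)  # True: the block wins by a wide margin
```

This is exactly why block-compressed Sequence files out-compress record-compressed ones: the compressor sees many near-identical rows at once.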
Usage, Advantages and Disadvantages

- They are mainly used as containers for small files, because storing many small files in HDFS can cause memory issues at the NameNode, and the number of tasks created during processing would also be higher, causing extra overhead.
- Sequence files contain sync markers to distinguish between the various blocks, which makes them splittable. So you can get splittability even with a non-splittable compression format like Snappy: compress the individual blocks and retain the splittable nature via the sync markers.
- The disadvantage of Sequence files is that they are not language neutral and can only be used with Java-based applications.

[Diagrams: uncompressed / record-compressed layout and block-compressed layout]

Avro

- Avro is a language-neutral data serialization format. Writables have the drawback that they do not provide language portability.
- Avro-formatted data can be described through a language-independent schema; hence Avro-formatted data can be shared across applications using different languages.
- Avro stores the schema in the header of the file, so the data is self-describing.
- Avro-formatted files are splittable and compressible, and hence a good candidate for data storage in the Hadoop ecosystem.
- Schema evolution – the schema used to read an Avro file need not be the same as the schema used to write it. This makes it possible to add new fields.
- An Avro schema is usually written in JSON format. We can generate schema files from Java POJOs using Avro-provided utilities as well.
- Corresponding Java classes can also be generated from an Avro schema file.
- Just as with Sequence files, Avro files contain sync markers to separate the blocks. This makes them splittable.
- These blocks can be compressed using compression formats such as Snappy and Deflate.
- Hive provides a built-in SerDe (AvroSerDe) to read and write Avro data in tables.
- Supported data types – primitives (int, boolean, float, double, string), complex types (map, array, record) and nested types.
- Example schema file –

{"type": "record", "name": "TradeRiskMeasureRow", "namespace": "com.vij.gm.riskdata.avro.pojo",
 "fields": [
   {"name": "businessDate", "type": ["null", "string"]},
   {"name": "bookId", "type": ["null", "string"]},
   {"name": "riskType", "type": ["null", "string"]},
   {"name": "rating", "type": "int", "default": -1},
   {"name": "dimensions", "type": {"type": "array", "items":
     {"type": "record", "name": "Dimensions", "fields": [
       {"name": "dimension", "type": {"type": "map", "values": "string"}},
       {"name": "amount", "type": "double"}
     ]}}}
 ]}

There are other formats as well, like Thrift and Protocol Buffers, which are similar to Avro. But Avro has become the de-facto standard, so we are not discussing those formats.

Processed Data File Formats

Columnar Formats

- They eliminate I/O for columns that are not part of the query, so they work well for queries which require only a subset of columns.
- They provide better compression, as similar data is grouped together in a columnar layout.

Parquet

- Parquet is a columnar format. Columnar formats work well where only a few columns are required in a query/analysis.
- Only the required columns are fetched/read, which reduces disk I/O.
- Parquet is well suited for data-warehouse-style solutions where aggregations are required on certain columns over a huge set of data.
- Parquet provides very good compression, up to 75% when used with compression formats like Snappy.
- Parquet can be read and written using the Avro API and an Avro schema.
- It also provides predicate pushdown, further reducing disk I/O cost.

Predicate Pushdown / Filter Pushdown

This concept is applied at the time of reading data from a data store. Most RDBMSs follow it, and big data storage formats like Parquet and ORC now follow it as well. When we supply filter criteria, the data store tries to filter out records at the time of reading from disk. This concept is called predicate pushdown.
The advantage of predicate pushdown is that fewer disk I/Os happen and hence performance is better. Otherwise the whole data set would be brought into memory and then filtered, which results in a large memory requirement.

Projection Pushdown

When data is read from the data store, only those columns are read which are required by the query, not all the fields. Generally columnar formats like Parquet and ORC follow this concept, which results in better I/O performance.

Prior to the Spark 1.6 release, predicate pushdown was not supported on nested/complex data types. But now we can leverage the advantages of predicate pushdown for nested and complex data types as well. Also, from the Spark 1.6 release onwards, predicate pushdown is turned on by default. For earlier versions, to enable predicate pushdown the command below was required –

sqlContext.sql("SET spark.sql.parquet.filterPushdown=true")

Note: Up till Spark 1.6 there are issues with predicate pushdown on string/binary data types. However, huge benefits are seen with int data types. To check the performance with and without filter pushdown, we performed some experiments on our sample trading risk data, which has the attribute "Rating". Rating is an indicator of the performance of the underlying instrument – how safe it is to trade on that instrument. In our sample data we have taken only two ratings, rating 1 and rating 2. Rating is of type int, so if predicate pushdown is working, the disk I/O should be nearly halved. Looking at the duration as well, there is a massive difference.
// Query with predicate pushdown (on by default from Spark 1.6)
val df = sqlContext.read.parquet("D://riskTestData/output/20160704/parquet")
df.registerTempTable("RiskData")
val df1 = sqlContext.sql("select * from RiskData where rating=1")
df1.collect()

// Same query with pushdown disabled, for comparison
sqlContext.sql("SET spark.sql.parquet.filterPushdown=false")
val df2 = sqlContext.read.parquet("D://riskTestData/output/20160704/parquet")
df2.registerTempTable("RiskData")
val df3 = sqlContext.sql("select * from RiskData where rating=1")
df3.collect()

Object Model, Object Model Converters and Storage Format

- The object model is the in-memory representation of data. In Parquet it is possible to change the object model, e.g. to Avro, which gives a rich object model.
- The storage format is the serialized representation of data on disk. Parquet has a columnar format.
- Object model converters are responsible for converting Parquet's data types into the object model's data types.

Hierarchically, a file consists of one or more row groups. A row group contains exactly one column chunk per column. Column chunks contain one or more pages.

Configurability and optimizations

- Row group size: larger row groups allow for larger column chunks, which makes it possible to do larger sequential I/O. Larger groups also require more buffering in the write path (or a two-pass write). We recommend large row groups (512 MB – 1 GB). Since an entire row group might need to be read, we want it to fit completely in one HDFS block; therefore, HDFS block sizes should also be set larger. An optimized read setup would be: 1 GB row groups, 1 GB HDFS block size, 1 HDFS block per HDFS file.
- Data page size: …
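Conceptually, what pushdown buys can be sketched in plain Python. This toy model (not the actual Parquet reader; the group layout and field names are invented) uses per-row-group min/max statistics so that a pushed-down predicate skips whole groups without touching their rows — which is why the rating=1 query above reads roughly half the data.

```python
# Toy model: each row group carries min/max statistics for the rating column.
groups = [
    {"min_rating": 1, "max_rating": 1, "rows": [{"rating": 1, "amt": 10.0}] * 4},
    {"min_rating": 2, "max_rating": 2, "rows": [{"rating": 2, "amt": 20.0}] * 4},
]

def read_with_pushdown(groups, rating):
    scanned, out = 0, []
    for g in groups:
        # Statistics rule this group out: skip it entirely (no "disk I/O").
        if not (g["min_rating"] <= rating <= g["max_rating"]):
            continue
        for row in g["rows"]:
            scanned += 1
            if row["rating"] == rating:
                out.append(row)
    return scanned, out

scanned, out = read_with_pushdown(groups, 1)
print(scanned)  # 4 -- only half the rows were touched, as in the rating experiment
```

Without pushdown, all 8 rows would be read into memory and filtered afterwards; with it, group 2's statistics eliminate it before any row is read.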
The Avro converter stores the schema of the data in the footer of the Parquet file, which can be inspected using the following command –

$ hadoop parquet.tools.Main meta tradesRiskData.parquet
creator: parquet-mr (build 3f25ad97f209e7653e9f816508252f850abd635f)
extra: avro.schema = {"type":"record","name":"TradeRiskMeasureRow","namespace" [more]…

We can also see the Parquet schema using the command below –

$ hadoop parquet.tools.Main schema tradesRiskData.parquet
message com.vij.gm.riskdata.avro.pojo.TradeRiskMeasureRow {
  required string businessDate (UTF8);
  required string bookId; [more]…

KiteSDK

- KiteSDK is an open source library by Cloudera. It is a high-level data layer for Hadoop: an API and a set of tools that speed up development.
- It can be used to read data from HDFS, transform it into your Avro model and store it as Parquet data. This way the object model is Avro and the storage is the Parquet columnar format.

ORC

- Columnar format.
- Splittable.
- Stores data as groups of rows, and each group has columnar storage.
- Gives indexing within row groups.
- Not general purpose, as it was designed to perform with Hive. We cannot integrate it with a query engine like Impala.

Writing and Reading ORC files in Spark

import org.apache.spark.sql.hive.orc._
val df = sqlContext.read.parquet("D://tmp/output/20160604/parquet")
df.write.partitionBy("businessDate", "book_Id").format("orc").save("D://tmp/output/20160604/orc")
val df_orc = sqlContext.read.orc("D://tmp/output/20160604/orc")

Note: When we compared the read/write time of ORC with Parquet, Parquet was the winner. For the same data set the ORC data size was larger than Parquet's, and the reading performance was also worse. Maybe we need to try ORC with some compression format, because as per some documents found on the internet, ORC should have better performance than Parquet. There are other issues as well, mainly related to Spark integration with ORC.
Though this has been resolved, the fix will only be available in Spark 2.0.

Columnar v/s Row Formats, or Parquet v/s Avro

Columnar formats are generally used where you need to query only a few columns rather than all the fields in a row, because their column-oriented storage pattern is well suited for that. Row formats, on the other hand, are used where you need to access all the fields of a row. So generally Avro is used to store the raw data, because during processing usually all the fields are required.

Note: People also prefer to keep the raw data in the original format in which it was received, alongside the Avro-converted data, which in my opinion is a waste of resources unless there are stringent requirements to store the data in its original format. As long as you can prove and back-trace the Avro-converted data to the original data, there is no need to store both copies. Storing the actual raw textual data has its own disadvantages, like type conversions and more disk space.

Compression

Schema Design

Though storing data in Hadoop is schema-less in nature, there are many other aspects which need to be taken care of. This includes the directory structure in HDFS as well as the output of data processing. It also includes the schemas of object stores such as HBase.

Partitioning

Partitioning is a common way to reduce disk I/O when reading from HDFS. Usually data in HDFS is very large, and reading the whole data set is not possible, and also not required in many cases. A good solution is to break the data into chunks/partitions and read only the required chunk. For example, if you have trade data for various business dates, partitioning can be done on business date. When placing the data in the filesystem, you should use the following directory format for partitions:

<data set name>/<partition_column_name=partition_column_value>/{files}

For example: tradeRiskData/businessDate=20160501/{book1.parquet, book2.parquet}

Further partitioning could also be done at book level.
tradeRiskData/businessDate=20160501/book=book1/{parquet file}
tradeRiskData/businessDate=20160501/book=book2/{parquet file}

This directory structure is understood by HCatalog, Spark, Hive, Impala and Pig, which can leverage partitioning to reduce the amount of I/O required during processing.

Partitioning considerations

- Do not partition on a column where you would end up with too many partitions.
- Do not partition on a column where you would end up with many small files within the partitions.
- It is good to have a partition size of ~1 GB.

Bucketing

If we try to partition the data on an attribute whose cardinality is high, the number of partitions created would be large. For example, if we try to partition the data by tradeId, far too many partitions would be created. This could cause the "too many small files" issue, resulting in memory pressure at the NameNode; the number of processing tasks created would also be higher (equal to the number of files) when processed with Apache Spark. In such scenarios, creating hash partitions/buckets is advisable. An additional advantage of bucketing shows up when joining two datasets on tradeId, e.g. joining trade attribute data (deal number, deal date etc.) with trade risk attribute data (risk measure type, risk amount) for reporting purposes.

Denormalization and Pre-aggregation

In Hadoop it is advisable to store the data in denormalized form so that less joining of data is required. Joins are the slowest operations in Hadoop, as they involve large shuffles of data. So data should be pre-processed for denormalization, and pre-aggregated as well if frequent aggregations are required.

Flat v/s Nested

In both Avro and Parquet you can have flat-structured as well as nested-structured data. As we discussed earlier, prior to Spark 1.6 predicate pushdown did not work on nested structures, but that issue has now been resolved, so storing data in nested structures is a viable option.
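Returning to bucketing: the hash-bucket idea described above can be sketched in a few lines of plain Python (the digest choice and bucket count are illustrative, not what Hive or Spark actually use). The key properties are that millions of tradeIds collapse into a fixed number of buckets per partition, and that the same tradeId always lands in the same bucket — which is what makes bucket-wise joins of two datasets possible.

```python
import hashlib

def bucket_for(trade_id: str, num_buckets: int = 32) -> int:
    """Stable hash bucket for a high-cardinality key (illustrative scheme)."""
    digest = hashlib.md5(trade_id.encode()).hexdigest()
    return int(digest, 16) % num_buckets

# Deterministic: the same tradeId maps to the same bucket every time, so two
# datasets bucketed the same way can be joined bucket by bucket.
b = bucket_for("TRADE-0001")
assert b == bucket_for("TRADE-0001")
assert 0 <= b < 32
```

With 32 buckets per businessDate partition, the file count stays bounded no matter how many trades arrive, avoiding the "too many small files" problem.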
Case Study – Risk Data Analytics

Typical data in risk analytics contains data for risk measures/types at trade level, calculated on a daily basis. These trades are booked under books. So the sample data looks like below –

businessDate, book, trade, riskType, <other attributes like curve, surface, tenor, underlying instrument, counterparty, issuer, rating etc.>, amount

There could be ~50K-75K books in a bank, depending upon its size and trading activity. This risk data generally comes in CSV/XML file formats from risk calculation engines to the risk reporting system. Per book there can be ~50K-1M trades, reported on a daily basis. Typical reports are risk data aggregated by book, risk data aggregated by counterparty, risk data by trade, and comparison reports to compare today's risk data with previous business dates. There are interactive queries as well, where users like to slice and dice the data on various attributes. With the change in reporting regulations, banks now need to store this data for 7 years and run calculations and reporting on it. Also, banks are now moving towards more real-time risk reporting rather than N+1-day reporting. This has posed a huge challenge, and banks are moving towards technologies like big data to meet it. To meet the real-time reporting requirements, a streaming data use case has also evolved. In this case study we will only be considering the batch use case.

In our case study we have assumed that the risk data arrives in CSV format. This data arrives in the landing zone in CSV format, and then we store it in the staging zone. During processing, data is in the transient zone, but once processed, the processed data is moved to the data hub in Parquet format. In the transient zone, data is in Avro format, as the transformations that need to be done during processing involve all the fields of the rows.
In the raw zone, data is stored in Avro format because data in this zone is kept for historical purposes. Processed data is moved to the data hub zone for analytical purposes in Parquet format, as analytical queries only fetch or run on a few columns rather than all the fields of the rows. Staging zone data is in Avro, as no processing happens on data in this zone; here data just waits for its turn to get processed. When processing needs to be done, this data is copied to the transient zone; at the same time it is moved to the raw zone as well. Here we have subdivided the raw zone into two – one to handle intraday data and the other to store end-of-day data. This pattern is typically followed when small files keep arriving throughout the day and, at the end of the day, all the data needs to be consolidated into one file to avoid the "too many small files" issue. This can be done using a small Spark job. This is not the only pattern in which data flows between the different zones; there can be variations in the data flows.

Like any typical ingestion scenario, for this case study as well we have used the three Spark jobs below. This job has been written in Spark, and we have used the KiteSDK library to interact with HDFS. While converting CSV to Avro, we realized that we needed a function which could take data in one data model, consolidate/combine the required rows based upon some key, and emit data in a new data model. This is required because we want to keep the data of one trade for one risk type together in one row; the raw data has multiple rows for the same trade and same risk type. Spark has a utility function, combineByKey, which solved our purpose.
How combineByKey works

combineByKey takes three arguments –

- A combiner-creation function (createCombiner)
- A merge-value function (mergeValue)
- A merge-combiners function (mergeCombiners)

createCombiner – lambda csvFlattenRecord: avroNestedRecord (turns the first CSV record seen for a key into a nested Avro record)
mergeValue – lambda avroNestedRecord, csvFlattenRecord: avroNestedRecord (folds a further CSV record for the same key into the existing nested record)
mergeCombiners – lambda avroNestedRecord1, avroNestedRecord2: avroNestedRecord3 (merges partial nested records produced on different partitions)

A more detailed explanation can be found here –

Avro to Avro-Parquet Format and Nested Data

Both Avro and Parquet support complex and nested data types. As we have already seen, we can have an Avro object model backed by Parquet storage, so using the Avro-Parquet format with nested data is an obvious choice for data modelling. For modelling our trading risk data we considered many data models, looking for one that is easily extendable, so that if a new risk measure comes with some new dimension we can easily accommodate it.

Approach 1 – Different Risk Measures in individual Parquet files

In this approach we considered storing each kind of risk measure (vector, matrix, cube or 4D tensor) in a different Parquet file, with a different schema for each.
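To make the three functions concrete, here is a pure-Python model of combineByKey's two-phase behaviour — combine within each partition, then merge the per-partition results. This is an illustration of the semantics only, not Spark's implementation, and the record fields are made up; the structure mirrors the CSV-rows-to-one-nested-record consolidation described above.

```python
def combine_by_key(partitions, create_combiner, merge_value, merge_combiners):
    """Model of Spark's combineByKey: per-partition combine, then global merge."""
    # Phase 1: combine values within each partition.
    partials = []
    for part in partitions:
        acc = {}
        for key, value in part:
            if key in acc:
                acc[key] = merge_value(acc[key], value)
            else:
                acc[key] = create_combiner(value)
        partials.append(acc)
    # Phase 2: merge the per-partition combiners across partitions.
    result = {}
    for acc in partials:
        for key, comb in acc.items():
            result[key] = merge_combiners(result[key], comb) if key in result else comb
    return result

# Consolidate CSV-style rows of one (trade, riskType) into a single record.
rows = [
    (("T1", "VECTOR"), {"tenor": "1Y", "amt": 10.0}),
    (("T1", "VECTOR"), {"tenor": "5Y", "amt": 20.0}),
    (("T2", "VECTOR"), {"tenor": "1Y", "amt": 5.0}),
]
combined = combine_by_key(
    [rows[:2], rows[2:]],                      # two simulated partitions
    create_combiner=lambda v: [v],             # first row seen -> new record
    merge_value=lambda acc, v: acc + [v],      # fold another row into it
    merge_combiners=lambda a, b: a + b,        # merge partial records
)
print(len(combined[("T1", "VECTOR")]))  # 2 measures merged into one record
```

The same three callbacks, with the list replaced by a nested Avro record builder, give the CSV-to-Avro consolidation the job performs.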
Schema for vector risk measures –

{"type": "record", "name": "TradeVectorRiskMeasure", "namespace": "com.vij.gm.riskdata.avro.pojo",
 "fields": [
   {"name": "isin", "type": "string"},
   {"name": "issuer", "type": "string"},
   {"name": "tradeId", "type": "string"},
   {"name": "businessDate", "type": "string"},
   {"name": "bookId", "type": "string"},
   {"name": "riskType", "type": "string"},
   {"name": "curve", "type": "string"},
   {"name": "riskMeasure", "type": {"type": "array", "items":
     {"type": "record", "name": "VectorRiskMeasureRow", "fields": [
       {"name": "tenor", "type": "string"},
       {"name": "amt", "type": "double"}
     ]}}}
 ]}

Schema for matrix risk measures –

{"type": "record", "name": "TradeMatrixRiskMeasure", "namespace": "com.vij.gm.riskdata.avro.pojo",
 "fields": [
   {"name": "isin", "type": "string"},
   {"name": "issuer", "type": "string"},
   {"name": "tradeId", "type": "string"},
   {"name": "businessDate", "type": "string"},
   {"name": "bookId", "type": "string"},
   {"name": "riskType", "type": "string"},
   {"name": "surface", "type": "string"},
   {"name": "riskMeasure", "type": {"type": "array", "items":
     {"type": "record", "name": "MatrixRiskMeasureRow", "fields": [
       {"name": "rateShift", "type": "string"},
       {"name": "volShift", "type": "string"},
       {"name": "amt", "type": "double"}
     ]}}}
 ]}

Query and Aggregation

Use of explode() makes the job easier while working with complex/nested structures like arrays, maps and structs. explode() is a Spark SQL method: it explodes the object to its granular level and exposes the attributes as columns, which helps us in aggregation queries.

Advantages

- Extendable: we can create a new schema if a new risk measure comes with an additional dimension, and store it in a separate Parquet file.
- By looking at the metadata, a user is able to know which fields are present in a particular Parquet file.

Disadvantages

- Code needs to be changed/added every time a new risk measure is added, to store and read the new Parquet file.
- There could be risk measures for which the dimension name is different. For example, in the Vector type (a 1-dimensional risk measure) we used tenor as the dimension, hard-coded as the field name; for some other risk measure the dimension could be something other than tenor.
- If somebody needs all the risk measures for one book or one trade, the query needs to run against all the different Parquet stores.

Approach 2 – Combined Parquet file for all risk measures with generic dimension names

Instead of hard-coding the dimension names, make them generic, and store all the risk measures in a single file. The dimensions part of the schema looks like this:

{"name": "dimensions", "type": {"type": "array", "items":
  {"type": "record", "name": "Dimensions", "fields": [
    {"name": "firstDimension", "type": ["null",
      {"type": "record", "name": "Dimension", "fields": [
        {"name": "name", "type": ["null", "string"]},
        {"name": "value", "type": ["null", "string"]}]}]},
    {"name": "secondDimension", "type": ["null", "Dimension"]},
    {"name": "thirdDimension", "type": ["null", "Dimension"]},
    {"name": "amount", "type": "double"}]}}}

In this approach we have pre-created a fixed set of dimension slots. Later, if a new risk measure arrives with an additional dimension, it can be added to the object model and to the Parquet file. Both Parquet and Avro support schema evolution, so adding new fields is easy.

Query and Aggregation

Filtering data from the nested part

Advantages

- A single query retrieves all the risk measures for one trade or one book, as there is only one Parquet file.
- Extensible, as the dimensions are now generic.

Disadvantages

- Code still needs to change if a risk measure arrives with an additional dimension, though it would rarely happen that a new risk measure requires the number of dimensions to grow, i.e. from a 4D to a 5D risk measure.
- To query the data, the dimension names need to be known up front; otherwise an extra query is needed to find the dimension names for a particular risk type.

Approach 3 – Generic Schema with no/minimal code changes for new dimensions

To resolve the challenges of the approaches above, we created a new model that is generic in nature, using a Map to store the dimensions. The relevant fields look like this:

{"name": "rating", "type": "int", "default": -1},
{"name": "dimensions", "type": {"type": "array", "items":
  {"type": "record", "name": "Dimensions", "fields": [
    {"name": "dimension", "type": {"type": "map", "values": "string"}},
    {"name": "amount", "type": "double"}]}}}

Query and Aggregations

Approach 4 – Flattened Schema

We can also use the flattened schema below:

{"type": "record", "name": "RiskMeasureFlattenRecord", "namespace": "com.vij.gm.riskdata.avro.pojo",
 "fields": [
   {"name": "businessDate", "type": "string"},
   {"name": "tradeId", "type": "string"},
   {"name": "isin", "type": "string"},
   {"name": "issuer", "type": "string"},
   {"name": "bookId", "type": "string"},
   {"name": "measureType", "type": "string"},
   {"name": "curve", "type": ["null", "string"]},
   {"name": "surface", "type": ["null", "string"]},
   {"name": "amount", "type": "double"},
   {"name": "rating", "type": "int"},
   {"name": "dimensionMap", "type": {"type": "map", "values": "string"}}]}

When we store data using this schema it takes a little more space than Approach 3, and data modelled with Approach 4 also took a bit more time and space to query than Approach 3.

Tips for improving performance while working with Spark SQL

Impact of setting spark.sql.shuffle.partitions

This parameter decides how many reducers run when we make an aggregation or group-by query. By default its value is 200, which means 200 reducers run and the shuffle is large. If we decrease it to 10, only 10 reducers run, there is less shuffling, and performance improves:
sqlContext.sql("SET spark.sql.shuffle.partitions=10")

Bring all the data related to one book into a single partition

When we started processing the CSV data, the data for a single book was distributed across different RDD partitions, so when we wrote the Parquet file partitioned by bookId, multiple Parquet files were created for a single book. The drawback is that when you query the data for a single book, Spark creates multiple tasks, depending on the number of files in that partition (partitioned by bookId). If the query is made at some higher level, even more tasks are spawned, causing performance issues if your nodes do not have that much compute capacity. So, to bring the data of all trades belonging to a single book into one partition, we wrote a new Partitioner that partitions the RDD by book.

Partition the data

To reduce disk I/O it is important to choose your partitioning strategy carefully. We partition by business date, then by book, as most of our queries are business-date specific and, within one business date, book specific:

tradeRiskData/businessDate=20160501/book=book1/{parquet file}
tradeRiskData/businessDate=20160501/book=book2/{parquet file}

We didn't partition by trade, as that has its drawbacks.

References

- Hadoop in Practice, Second Edition
- Data Lake Development with Big Data
- Hadoop Application Architectures
- Dremel made simple with Parquet
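The book partitioner mentioned above reduces to a deterministic bookId-to-partition mapping, so that every record of one book lands in the same partition. A minimal sketch of that logic in plain Python (Spark's actual Partitioner interface lives on the JVM side; the names here are illustrative):

```python
class BookPartitioner:
    """Route every record of a given book to the same partition."""

    def __init__(self, num_partitions):
        self.num_partitions = num_partitions

    def get_partition(self, book_id):
        # deterministic within a run: the same bookId always maps
        # to the same partition index
        return hash(book_id) % self.num_partitions


partitioner = BookPartitioner(num_partitions=8)
p1 = partitioner.get_partition("book1")
p2 = partitioner.get_partition("book1")  # same book -> same partition
```

In PySpark this function would be passed to partitionBy on a keyed RDD, so a subsequent write partitioned by bookId produces one Parquet file per book instead of many.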
The if statement checks the validity of an expression and executes a block of code once. We use a while statement to run a block of code repeatedly, as long as an expression evaluates to true. Say you are a rock fan collecting rock albums and would like to stop when you have collected 10 albums.

#include <iostream>
#include <string>

using namespace std;

int main()
{
    int album = 0;
    while (album < 10)
    {
        album++;
        cout << "I have " << album << " albums\n";
    }
    cout << "\n";
    cout << "I have collected 10 rock albums\n";
    cout << endl;
    return 0;
}

Let's first look at the statement album++. The ++ operator is placed after album, which is called post-increment. Since album is an integer, it adds 1 to it. For example, in the code above, album starts at 0, but after album++ runs in the loop body, the first line printed is: I have 1 albums. It is worthwhile to figure out why the last line printed says 10 albums when the while expression is (album < 10).
Package Details: hydrus 348-1

Dependencies (21)
- gtkglext
- hdf5 (hdf5-openmpi-java, hdf5-java, hdf5-openmpi)
- opencv (opencv-with-python2-support, opencv-cuda-git, opencv-git, opencv2, opencv2-samples, opencv-cuda)
- python (python-dbg)
- python-beautifulsoup4
- python-html5lib (python-html5lib-git)
- python-numpy (python-numpy-mkl, python-numpy-openblas)
- python-pillow (python-pillow-simd)
- python-psutil
- python-pysocks
- python-requests
- python-send2trash
- python-twisted
- python-yaml
- python-lz4>=0.10.1
- python-wxpython>=4.0.0
- git (git-git) (make)
- desktop-file-utils (desktop-file-utils-git) (optional) – to add Hydrus to your desktop environment menus
-) (optional) – show duration and other information on video thumbnails
- miniupnpc (miniupnpc-git) (optional) – automatic port forwarding
- python-matplotlib (python-matplotlib-git) (optional) – bandwidth usage graphs

Latest Comments

irlittz commented on 2019-01-30 19:06
python-matplotlib should be either added as a dependency or optional dependency, since it provides functionality within Hydrus (rendering charts of traffic in the network->review bandwidth usage tab).

nadomodan commented on 2018-12-13 17:30
@jtmb edit and maintainer, besides this being the last python2 release, as of 334 hydrus no longer uses pafy according to release notes so that dependency can be removed

jtmb commented on 2018-12-01 06:03
@Score_Under, I was able to pull the pkgbuild from before python2-wxpython-phoenix was deleted and build it successfully. I wasn't getting any core.so issues, so hopefully it will work until they change over to python3.
Edit: so apparently python2-pafy isn't getting built in its split package anymore, either, so here is a PKGBUILD to make python2-pafy until that switches over.
nadomodan commented on 2018-11-29 11:40
Good news is you won't have to deal with python2 for much longer

Score_Under commented on 2018-11-29 03:25
Well, I've done something I know I will regret, and that's create a package for opencv with python2 support. That said, python2-wxpython-phoenix has finally kicked the bucket, so that will need replacing. However, every build I have compiled on my own machine has somehow been missing an "init_core" function in its "_core.so". I can't make heads nor tails of this, and gave up some time early October, only to try again now and be faced with exactly the same problem. If anyone can figure out what on earth is going on here & how to compile wxpython-phoenix for python2, I would be grateful for the help.

jtmb commented on 2018-11-23 22:55
@Score_Under same as @nadomodan, opencv 3.4.4-1, no /usr/lib/python2.7/site-packages/cv2.so

nadomodan commented on 2018-11-23 18:51
@Score_Under I have the same problem, opencv 3.4.4-1:

pacman -Qo /usr/lib/python2.7/site-packages/cv2.so
error: No package owns /usr/lib/python2.7/site-packages/cv2.so

downgrading to opencv 3.4.3-5 on which hydrus works gives this output:

pacman -Qo /usr/lib/python2.7/site-packages/cv2.so
/usr/lib/python2.7/site-packages/cv2.so is owned by opencv 3.4.3-5

finnickal commented on 2018-11-23 16:13
I had the same problem as @jtmb and installed opencv2 from the AUR to solve it.

Score_Under commented on 2018-11-23 12:43
@jtmb, I can't reproduce this. Can you tell me what version of opencv you have installed, and whether it drops a "cv2.so" in /usr/lib/python2.7/site-packages?

jtmb commented on 2018-11-21 19:03
client fails to start with

import cv2
ImportError: No module named cv2

I believe it's related to opencv but I haven't figured out a workaround.
Now that the JavaFX SDK Technology Preview has been released, I'd like to get you up to speed on how to create your own "custom nodes". This is JavaFX-speak for widgets, gadgets, UI components, whatever, but the purpose is the same: to be able to create a potentially reusable UI thingy for JavaFX programs. Today's example demonstrates how to create a custom node (in fact, two), and here's a screenshot:

By the way, a big thanks goes to Edgar Merino for pointing out some simplifications to the code that have now been implemented in this example. If you would like to try it out, click on this Java Web Start link, keeping in mind that you'll need at least JRE 6. Also, installing Java SE 6 update 10 will give you faster deployment time.

As I mentioned in the JavaFX SDK Packages are Taking Shape post, JavaFX is adopting a graphical "node-centric" approach to UI development, so nearly everything in a JavaFX user interface is a Node. When you want to create your own custom node, you'll extend the CustomNode class, giving it your desired attributes and behavior. Shown below is the code for the custom node in the example that displays an image and responds to mouse events (e.g. becoming more translucent and showing the text when rolling the mouse over).

Note: You may be wondering why I don't just use the Button class that is located in the javafx.ext.swing package. The reason is that the Button class is a Component, not a Node, and I think that it is best to follow the stated direction of moving to a node-centric approach. At some point there will be a button that subclasses Node, at which point the ButtonNode class in this example may not be needed anymore.

ButtonNode.fx

/*
 * ButtonNode.fx -
 * A node that functions as an image button
 *
 * Developed 2008 by James L. Weaver (jim.weaver at lat-inc.com)
 * and Edgar Merino () to demonstrate how
 * to create custom nodes in JavaFX
 */
package com.javafxpert.custom_node;

import javafx.animation.*;
import javafx.input.*;
import javafx.scene.*;
import javafx.scene.effect.*;
import javafx.scene.geometry.*;
import javafx.scene.image.*;
import javafx.scene.paint.*;
import javafx.scene.text.*;
import javafx.scene.transform.*;

public class ButtonNode extends CustomNode {

  /**
   * The title for this button
   */
  public attribute title:String;

  /**
   * The Image for this button
   */
  private attribute btnImage:Image;

  /**
   * The URL of the image on the button
   */
  public attribute imageURL:String on replace {
    btnImage = Image {
      url: imageURL
    };
  }

  /**
   * The percent of the original image size to show when mouse isn't
   * rolling over it.
   * Note: The image will be its original size when it's being
   * rolled over.
   */
  public attribute scale:Number = 0.9;

  /**
   * The opacity of the button when not in a rollover state
   */
  public attribute opacityValue:Number = 0.8;

  /**
   * The opacity of the text when not in a rollover state
   */
  public attribute textOpacityValue:Number = 0.0;

  /**
   * A Timeline to control fading behavior when mouse enters or exits a button
   */
  private attribute fadeTimeline = Timeline {
    toggle: true
    keyFrames: [
      KeyFrame {
        time: 600ms
        values: [
          scale => 1.0 tween Interpolator.LINEAR,
          opacityValue => 1.0 tween Interpolator.LINEAR,
          textOpacityValue => 1.0 tween Interpolator.LINEAR
        ]
      }
    ]
  };

  /**
   * This attribute is interpolated by a Timeline, and various
   * attributes are bound to it for fade-in behaviors
   */
  private attribute fade:Number = 1.0;

  /**
   * This attribute represents the state of whether the mouse is inside
   * or outside the button, and is used to help compute opacity values
   * for fade-in and fade-out behavior.
   */
  private attribute mouseInside:Boolean;

  /**
   * The action function attribute that is executed when the
   * the button is pressed
   */
  public attribute action:function():Void;

  /**
   * Create the Node
   */
  public function create():Node {
    Group {
      var textRef:Text;
      content: [
        Rectangle {
          width: bind btnImage.width
          height: bind btnImage.height
          opacity: 0.0
        },
        ImageView {
          image: btnImage
          opacity: bind opacityValue;
          scaleX: bind scale;
          scaleY: bind scale;
          translateX: bind btnImage.width / 2 - btnImage.width * scale / 2
          translateY: bind btnImage.height - btnImage.height * scale
          onMouseEntered: function(me:MouseEvent):Void {
            mouseInside = true;
            fadeTimeline.start();
          }
          onMouseExited: function(me:MouseEvent):Void {
            mouseInside = false;
            fadeTimeline.start();
            me.node.effect = null
          }
          onMousePressed: function(me:MouseEvent):Void {
            me.node.effect = Glow {
              level: 0.9
            };
          }
          onMouseReleased: function(me:MouseEvent):Void {
            me.node.effect = null;
          }
          onMouseClicked: function(me:MouseEvent):Void {
            action();
          }
        },
        textRef = Text {
          translateX: bind btnImage.width / 2 - textRef.getWidth() / 2
          translateY: bind btnImage.height - textRef.getHeight()
          textOrigin: TextOrigin.TOP
          content: title
          fill: Color.WHITE
          opacity: bind textOpacityValue
          font: Font {
            name: "Sans serif"
            size: 16
            style: FontStyle.BOLD
          }
        },
      ]
    };
  }
}

Some things to note in the ButtonNode.fx code listing above are:

- Our ButtonNode class extends CustomNode.
- This new class introduces attributes for storing the image and text that will appear on the custom node.
- The create() function returns the declarative expression of our custom node's UI appearance and behavior.
- The Glow effect in the javafx.scene.effect package is used to brighten the image when clicked.
- The opacity of the image, the size of the image, and the title of the custom node are transitioned as the mouse enters and exits the button. A Timeline is employed to make these transitions gradual.
- After adjusting opacity and applying a glow effect, the onMouseClicked function calls the action() function attribute defined earlier in the listing. This makes our custom node behave like the familiar Button.

Arranging the ButtonNode instances into a "menu"

As shown in the Setting the "Stage" for the JavaFX SDK post, the HBox class is located in the javafx.scene.layout package, and is a node that arranges other nodes within it. The MenuNode custom node shown below arranges the ButtonNode instances horizontally, and it uses the Reflection class in the javafx.scene.effect package to add a nice reflection effect below the buttons. Here's the code:

MenuNode.fx

/*
 * MenuNode.fx -
 * A custom node that functions as a menu
 *
 * Developed 2008 by James L. Weaver (jim.weaver at lat-inc.com)
 * to demonstrate how to create custom nodes in JavaFX
 */
package com.javafxpert.custom_node;

import javafx.scene.*;
import javafx.scene.effect.*;
import javafx.scene.layout.*;

public class MenuNode extends CustomNode {

  /*
   * A sequence containing the ButtonNode instances
   */
  public attribute buttons:ButtonNode[];

  /**
   * Create the Node
   */
  public function create():Node {
    HBox {
      spacing: 10
      content: buttons
      effect: Reflection {
        fraction: 0.50
        topOpacity: 0.8
      }
    }
  }
}

Using our custom nodes

Now that the custom nodes have been defined, I'd like to show you how to use them in a simple program. If you've followed this blog, you know that "the way of JavaFX is to bind the UI to a model". In this simple example, since I really want to focus on teaching you how to create custom nodes, I'm not going to complicate things by creating a model and binding the UI to it. Rather, I'm simply printing a string to the console whenever a ButtonNode instance is clicked. Here's the code for the main program in this example:

MenuNodeExampleMain.fx

/*
 * MenuNodeExampleMain.fx -
 * An example of using the MenuNode custom node
 *
 * Developed 2008 by James L. Weaver (jim.weaver at lat-inc.com)
 * to demonstrate how to create custom nodes in JavaFX
 */
package com.javafxpert.menu_node_example.ui;

import javafx.application.*;
import javafx.scene.paint.*;
import javafx.scene.transform.*;
import java.lang.System;
import com.javafxpert.custom_node.*;

Frame {
  var stageRef:Stage;
  var menuRef:MenuNode;
  title: "MenuNode Example"
  width: 500
  height: 400
  visible: true
  stage: stageRef = Stage {
    fill: Color.BLACK
    content: [
      menuRef = MenuNode {
        translateX: bind stageRef.width / 2 - menuRef.getWidth() / 2
        translateY: bind stageRef.height - menuRef.getHeight()
        buttons: [
          ButtonNode {
            title: "Play"
            imageURL: "{__DIR__}icons/play.png"
            action: function():Void {
              System.out.println("Play button clicked");
            }
          },
          ButtonNode {
            title: "Burn"
            imageURL: "{__DIR__}icons/burn.png"
            action: function():Void {
              System.out.println("Burn button clicked");
            }
          },
          ButtonNode {
            title: "Config"
            imageURL: "{__DIR__}icons/config.png"
            action: function():Void {
              System.out.println("Config button clicked");
            }
          },
          ButtonNode {
            title: "Help"
            imageURL: "{__DIR__}icons/help.png"
            action: function():Void {
              System.out.println("Help button clicked");
            }
          },
        ]
      }
    ]
  }
}

Notice that the action attributes are assigned functions that are called whenever the user clicks the mouse on the corresponding ButtonNode, as pointed out earlier. Also notice that the __DIR__ expression evaluates to the directory in which the CLASS file resides. In this case, the graphical images are located in a com/javafxpert/menu_node_example/ui/icons directory. By the way, the images for this article can be downloaded so that you can build and run this example with the graphics. This is a zip file that you can expand in the project's classpath.

It is my intent to build up a library of useful custom nodes for the JavaFX SDK Technology Preview and post them in the JFX Custom Nodes category of this blog.
If you have ideas for custom nodes, or would like to share ones that you've developed, please drop me a line at jim.weaver at lat-inc.com

By the way, after this post ran, Weiqi Gao reported some cool news in his Java WebStart Works On Debian GNU/Linux 4.0 AMD64 post. I'm partial to Weiqi (pronounced way-chee), of course, because he did a great job on the technical review of our JavaFX Script book ;-)

Thanks,
Jim Weaver

JavaFX Script: Dynamic Java Scripting for Rich Internet/Client-side Applications
Immediate eBook (PDF) download available at the book's Apress site

Thx a lot for this nice menu node it's really great. But maybee you can answer me a question, I'm using JavaFX 1.2 and the command Timeline { toggle: true ... toogle doesn't exists anymore? Do you have any idea what I can use instead? Thx greetz
Posted by: Corinne | June 15, 2009 at 07:47 AM

Very nice article. I'm all agree with you.
Posted by: Betsson08 | November 30, 2008 at 05:08 PM

I'm fan of JavaFX. I developed some custom nodes and will publish them in very near future. MenuNode is alo fantastic. Also please add Linux support ASAP ;) Linux systems are getting more users day by day.
Posted by: Betsson | November 24, 2008 at 09:48 AM

Ram, The images for this article can be downloaded from the following location: That is a zip file that you can expand in the project's classpath.
Posted by: Jim Weaver | August 15, 2008 at 04:37 PM

Hello James, Very nice article. Where are the images located that are used in this example? I am learning JavaFx and love to try your example. Thanks, Ram
Posted by: Ram | August 14, 2008 at 10:06 AM

Thanks Raj!
Posted by: Jim Weaver | July 28, 2008 at 02:53 PM

Hi James, It look really nice. I am visiting your web site for first time and i found very informative. i need to start reading your articles from back :) hope i must not loose a single article. You write really nice and covers nice topics, please keep going. i be one of your regular reader from now :) Cheers!!! Raj
Posted by: Raj | July 28, 2008 at 01:44 PM

Jason Young wrote: ." Jason, Thanks, I'll do that!
Jim Weaver
Posted by: Jim Weaver | July 25, 2008 at 02:25 PM

Bernard wrote: "Nice article. One question though: why is it preferred to use nodes rather than components? Is it because they are lighter weight? Could you please give me a link with more info on this topic?" and karenpp wrote: ?"

I'm glad you both brought this up. The fact that JavaFX started out with components and nodes is because of Java's Swing and Java2D history. Working with these two "worlds" in the same language forces the developer to use adapters from one world to the other. For example, if you have some sort of container Component (say, a BorderPanel), and you want to draw in it, you have to place a Canvas in it and draw on the Canvas. Conversely, if you have a Canvas, and want to place a Swing component on it, you have to use a class (View in Java, ComponentView in JavaFX). The approach that I, and others, advocated was to make everything a node. Even layouts would be nodes, and would contain nodes. Currently, the JavaFX team is aggressively creating a set of skinnable UI controls that use CSS for styling. Here's a thread from the JavaFX GUI mailing list that discusses the first few node-based controls to be created. Meanwhile, of course, you can use the components in the javafx.ext.swing package. Thanks, and please let me know if you have any more feedback or questions!
Jim Weaver
Posted by: Jim Weaver | July 25, 2008 at 02:23 PM

?
Posted by: karenpp | July 25, 2008 at 12:05 PM

Hi James, Nice article. One question though: why is it preferred to use nodes rather than components? Is it because they are lighter weight? Could you please give me a link with more info on this topic? Cheers, Bernard
Posted by: Bernard Kushfing Zoopla | July 24, 2008 at 06:21 PM

.
Posted by: Jason Young | July 23, 2008 at 06:50 PM
There's been a lot of hype lately about Java Data Objects (JDO). It appears to be the new silver bullet that will alleviate all of our coding drudgery. JDO threatens the livelihood of products such as object/relational mapping utilities that map Java objects to relational data. Because of this, and for other reasons, JDO has received more than its fair share of bad press. To be fair, however, JDO does have its merits. My own take on why the specification evolved is simple: I too am tired of writing JavaBeans to present data from a relational database as objects in my Java programs. I've written everything from lightweight JavaBeans, that simply hold data like a structure and have accessor and mutator methods, to heavyweight JavaBeans, that know how to retrieve and persist themselves.

I have two issues with JDO. The first is that somehow SQL has become a bad thing. SQL remains one of the major breakthroughs in our industry. Its non-procedural mechanics freed us from the high priests that used to control persisted data, and its power to bring together formerly unrelated populations of information still makes it the most powerful computing tool of our time. The second is that JDO's abstraction treats a Database Management System (DBMS) like a file. Restated: instead of treating a database like an extension of memory for your application, you end up treating it like a file in a file system, where you open the file, read it into your application, write any changes back out to disk, and then close the file. This kind of behavior does not properly facilitate multi-user access.

Most important, though, is that a workaround like JDO is indicative of a more fundamental problem: as object-oriented programmers, we have been leveraging object-orientation to improve how we build software, but we have not applied it to solving the business problems for which we write software. We are, for lack of a better term, stuck in the world of functional decomposition.
Rather than view the world in a natural way, we abstract things in the real world by categorizing information about them, but ignore their behaviors. You can see evidence of this predominant way of thinking in the way we communicate. Take the word database, for example. Our everyday language is imbued with this word. Object and object-relational persistence vendors call their solutions object database management systems and object-relational database management systems. Even the employees at Oracle call their persistence solution an object-relational database. What we really need is a different term: objectbase. We need to move our thinking and our problem domain object models into an object persistence layer: an objectbase. JDO itself is a sentinel that we need to make the paradigm shift from a database -- a place where we persist information -- to an objectbase -- a place where we persist information and behavior.

I'm fortunate to have lived through one such paradigm shift, from ISAM/VSAM files, hierarchical databases, and network databases to relational databases, so for me the shift in thought is not so difficult. But what is an objectbase? To some, an objectbase is a place where you store and retrieve binary executables that can be accessed as needed anywhere at any time. Computers and networks will be extremely different than they are today when that definition of an objectbase is fulfilled. I, just like you, need a solution now. And that solution currently exists in the form of object-relational (OR) technology.

With OR, you define object types in the objectbase, use a tool to generate your JavaBeans to mirror the objectbase types, and then use the Java Database Connectivity (JDBC) API, or SQLJ, to access objects from Java. Oracle's implementation of OR provides you with all of the strengths of their relational database, plus the ability to add methods to object types (or user defined types (UDTs), as they are called in SQL:1999).
In this article, in an effort to give you a fair comparison of what is necessary to use OR with JDBC vs. JDO, I'll be showing you how to insert, update, and select a person object using JDBC in a similar fashion to "Using Java Data Objects," published recently on OnJava.com.

The first step is to create a user defined type (UDT). To do so, you'll use the SQL CREATE TYPE command, as in this example:

create type PERSON_TYPE as object (
  name varchar2(255),
  address varchar2(255),
  ssn varchar2(255),
  home_phone varchar2(255),
  work_phone varchar2(255)
)

The primary data types you have available to use in defining a new type are those supported by your OBMS vendor. For Oracle, the three most common primary types are: DATE for date and time values, NUMBER for numeric values, and VARCHAR2 for textual values. Here, I've defined a new type, named PERSON_TYPE.

The next step is to create a table based upon our new UDT. A table based upon a UDT is called an object table. To create an object table, you'll use the SQL CREATE TABLE command, as in this example:

create table PERSON_OBJECT_TABLE of PERSON_TYPE

Here, I've created an object table named PERSON_OBJECT_TABLE based upon the UDT PERSON_TYPE. This is the table I'll be working with in the example demonstration program. It's important to note that for this example, I've decided to use the object references created by the objectbase as a unique identifier for my object relationships. As an alternative, I could have just as easily used primary and foreign keys, and constraints, as I would with a relational database.

Once a UDT is created in your database, you can create a mirror class for your Java program by coding a JavaBean that implements the SQLData interface, or, if you're using Oracle, you can use the JPublisher Utility to generate a JavaBean for you. Example 1 is a hand-coded class, Person, to mirror the objectbase's UDT PERSON_TYPE. Class Person implements the three methods required by the SQLData interface: getSQLTypeName(), readSQL(), and writeSQL().

Example 1. The Person class

import java.io.*;
import java.sql.*;

/** A mirror class to hold a copy of SCOTT.PERSON_TYPE */
public class Person implements SQLData, Serializable {

  private String name;
  private String address;
  private String ssn;
  private String email;
  private String home_phone;
  private String work_phone;

  public Person() {
  }

  // SQLData interface
  public String getSQLTypeName() throws SQLException {
    return "SCOTT.PERSON_TYPE";
  }

  public void readSQL(SQLInput stream, String type) throws SQLException {
    name = stream.readString();
    address = stream.readString();
    ssn = stream.readString();
    home_phone = stream.readString();
    work_phone = stream.readString();
  }

  public void writeSQL(SQLOutput stream) throws SQLException {
    stream.writeString(name);
    stream.writeString(address);
    stream.writeString(ssn);
    stream.writeString(email);
    stream.writeString(home_phone);
    stream.writeString(work_phone);
  }

  // Accessors
  public String getName() {
    return name;
  }

  public String getAddress() {
    return address;
  }

  public String getSsn() {
    return ssn;
  }

  public String getEmail() {
    return email;
  }

  public String getHomePhone() {
    return home_phone;
  }

  public String getWorkPhone() {
    return work_phone;
  }

  // Mutators
  public void setName(String name) {
    this.name = name;
  }

  public void setAddress(String address) {
    this.address = address;
  }

  public void setSsn(String ssn) {
    this.ssn = ssn;
  }

  public void setEmail(String email) {
    this.email = email;
  }

  public void setHomePhone(String homePhone) {
    home_phone = homePhone;
  }

  public void setWorkPhone(String workPhone) {
    work_phone = workPhone;
  }
}

Coding a JavaBean like this is not hard, but it is tedious. The readSQL() and writeSQL() methods simply read and write the data values from or onto their respective streams in the order in which the attributes exist in the UDT.

Before you can use your mirror class to retrieve objects from, and to save objects to, an objectbase, you need to let the JDBC Connection you'll be using know that a mirror class exists, and to which UDT it should be mapped. To accomplish this, you'll update the connection's type map.
Connection The type of Connection object, whether it comes from DriverManager or a DataSource, is of no consequence; either will work. In the demonstration program from Example 2, DemonstrateOR, I use a Connection object returned from DriverManager. First, the program gets the current type map from the connection by calling its getTypeMap() method. Next, the program adds an entry to the Map object using its put() method, passing the name of the UDT, and a copy of the mirror class, Person. Last, the program stores the updated type map in the connection by calling its setTypeMap() method, passing the updated Map object. At this point, the JDBC driver knows what Java class to instantiate when a copy of the UDT is retrieved from the objectbase, and vice versa. DriverManager DataSource DemonstrateOR getTypeMap() Map put() setTypeMap() Pages: 1, 2 Next Page © 2017, O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
If you’ve read my previous blogs, you’ll know that I really like working with ABAP Objects. But still there are a few things that are simply missing – for example, decent inheritance/type checking. Perhaps some of the people who are responsible for the language itself are listening in and might want to comment or even add this item to their agenda…

What I would really like to see is an extension to logical expressions that allows me to check whether a certain instance is a subclass of another class or implements a certain interface. For example:

IF lr_foo IS INSTANCE OF cl_bar.
IF lr_foo IS INSTANCE OF if_barable.

I’d also suggest that a dynamic variant might be helpful:

l_classname = 'CL_BAR'. " ...however this might be determined
IF lr_foo IS INSTANCE OF (l_classname).

I agree with the widespread notion that it’s usually not a good idea to rely on a certain inheritance structure when designing applications and coding generic stuff. Walking an object tree that was passed using generic references and casting stuff around might lead to very fragile applications – what if I relied on some internal class of the framework and that class changed or disappeared altogether? Therefore the first example (IS INSTANCE OF some_class) is probably not the cleanest approach to object orientation – but sometimes, a programmer’s got to do what a programmer’s got to do.

On the other hand, being able to check whether an object implements a certain interface can be really helpful. Think about this:

DATA: lr_displayable TYPE REF TO if_xyz_displayable.

IF ir_object IS INSTANCE OF if_xyz_displayable.
  lr_displayable ?= ir_object.
  lr_displayable->display( ).
ELSE.
* perform some generic implementation
ENDIF.

Unfortunately, this statement is not (yet?) implemented. So what are the alternatives? The probably cleanest solution at the current time is to use the run-time type inspection (RTTI).
This can be done roughly like this:

DATA: lr_descriptor  TYPE REF TO cl_abap_objectdescr,
      lr_displayable TYPE REF TO if_xyz_displayable.

lr_descriptor ?= cl_abap_typedescr=>describe_by_name( 'IF_XYZ_DISPLAYABLE' ).
IF lr_descriptor->applies_to( ir_object ) = abap_true.
  lr_displayable ?= ir_object.
  lr_displayable->display( ).
ELSE.
* perform some generic implementation
ENDIF.

This is definitely harder to read and to write than an INSTANCE OF operator would be, and the additional variable required doesn’t make things much better.

Another way of dealing with this kind of issue seems to originate from the fans of Demolition Derby. Simply put: try to cast the instance, and if that fails, it wasn’t one of ours:

DATA: lr_displayable TYPE REF TO if_xyz_displayable.

TRY.
    lr_displayable ?= ir_object.
    lr_displayable->display( ).
  CATCH cx_sy_move_cast_error.
* perform some generic implementation
ENDTRY.

While this implementation does get rid of the additional variable, it also neatly disposes of code legibility. In my opinion, it’s like catching a CX_SY_ZERODIVIDE to check whether a number is zero.

But there’s one idea that is so overwhelmingly mind-boggling that I just have to mention it: Yep, that’s right. Force every programmer in your vicinity to implement that interface and to implement the methods correctly so that your inheritance checking actually works. Waste an unknown amount of time ensuring that GET_TYPE – which actually returns an integer – returns unique values. Even add a customizing table so that customers can ‘reserve’ ‘namespaces’ for their own subclasses of framework classes. And it doesn’t even solve the problem of interface implementation checking. I sincerely hope there was a good reason to choose this design back in 2003 and an even better reason not to remove it in the meantime…
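For what it's worth, the RTTI boilerplate can be tucked away in a small helper so that call sites stay readable. A sketch (the class and method names here are made up, not standard SAP objects):

```abap
CLASS lcl_type_util DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS is_instance_of
      IMPORTING ir_object       TYPE REF TO object
                i_typename      TYPE string
      RETURNING VALUE(r_result) TYPE abap_bool.
ENDCLASS.

CLASS lcl_type_util IMPLEMENTATION.
  METHOD is_instance_of.
    DATA: lr_descriptor TYPE REF TO cl_abap_objectdescr.
    " Works for both classes and interfaces, since CL_ABAP_CLASSDESCR and
    " CL_ABAP_INTFDESCR both inherit from CL_ABAP_OBJECTDESCR.
    lr_descriptor ?= cl_abap_typedescr=>describe_by_name( i_typename ).
    r_result = lr_descriptor->applies_to( ir_object ).
  ENDMETHOD.
ENDCLASS.

" Usage:
" IF lcl_type_util=>is_instance_of( ir_object  = ir_object
"                                   i_typename = 'IF_XYZ_DISPLAYABLE' ) = abap_true.
```

Note that DESCRIBE_BY_NAME will raise an exception for an unknown type name, so a production version would want to handle that case as well.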
CL_WDY_WB_REFLECTION_HELPER=>IS_INSTANCE_OF is not only discouraged, it’s also simply a wrapper around the try-crash – ehm – try-catch pattern I mentioned, with some generics on top. No improvement as far as I can see…

Volker

I am starting my road down ABAP Objects. It’s a long and winding one. Project deadlines, and all that. I do use them, sometimes even correctly. I’m bookmarking for later. Thank you for a great blog!

Michelle
jidanni@jidanni.org wrote:
> Thanks but junior user me is not ready to compile anything.

No junior user is ready to modify resources files, while anybody is able to enter 5 command lines that I gave you in the previous mail. What's the point of reporting bugs and then refusing to help debug them?

> However I'm convinced that the problem is merely that your good
> intentioned effort to fill in the unbalanced lack of
>
> 38a39
> > #endif
>

Our good intentioned effort comes from upstream, we don't patch Xresources files without a good reason. They broke the resources file in 1.1.6 and fixed it right after.

> in /etc/X11/xdm/Xresources, ended up being put in too early in the
> file. Closing the wrong #if.
>
> This causes innocent "800" users to end up swallowing a lot of code
> meant for "> 800" screens. Test it on a 800x600. It goes off the edge
> of the screen!
>

Please provide a patch, junior user me is not ready to test/debug xdm in 800x600 mode.

> And on >800 screens I still have to comment out
> !xlogin*failFace: Helvetica-18:bold
> to make the bevel problem go away.
>

Please don't mix multiple bug reports. You already said that in another report, we read it.

Thanks,
Brice
That's The Way Love Goes

I've Been Throwin' Horse Shoes Over My Left Shoulder

...As you might know, we cover the international capital flow statistics on a pretty darn regular basis. In the spirit and mantra of "thinking globally", we personally consider this little exercise a must. And we really cannot see how that's going to change in the future. A few weeks back the June global capital flow statistics that pertain to the US financial markets and assets were released. Although we always get the numbers with a lag from our wonderful friends at the Treasury department, making them a bit meaningless in terms of day-to-day decision-making worth, it's the longer term trends that we believe are very important. And it just so happens that the June numbers contain a few very notable anomalies. Anomalies that just may be markers of meaningful change in global capital flows. We'll move through this quickly. Again, it's the change in longer-term trends that we're after here and how those changes may influence the US capital and broader financial markets ahead. Those are the important issues.

Anomaly number one is the fact that in June there was actually a drop in UST holdings among the "major" foreign holders of UST's. There was not a drop in aggregate foreign UST holdings, but again a drop among the major foreign holders (we list the specific major foreign holders in a table below). The issue that caught our eye was that we have not experienced this magnitude of a month over month drop in Treasury holdings among the major foreign players since March of 2000. An interesting date to say the least. The second issue, which we will cover in a bit more detail, is the fact that the foreign community purchased a record amount of US corporate debt in June. $52+ billion dollars worth of corporate bond purchases to be exact. That's far from insignificant.
What we've seen so far in 2005 is the foreign community losing the love they have so dutifully expressed for US Treasuries over the last three to four years and finding new romance with the US corporate fixed income sector. Moreover, the leading cast of lovers is changing. Europe has emerged as the new US financial asset Don Juan while Asia (largely Japan) has adopted the role of wallflower for the moment. Oh well, that's the way love goes, right? Again, in aggregate, it's not so much that total foreign community enthusiasm for US financial assets is waning, but rather that the cast of buyers is changing more than noticeably. We fully understand the Asian motive behind supporting the US dollar vis-à-vis the purchasing of US financial assets over time, but what is motivating Europe to pick up the eye opening slack created by the Asians? Quite unfortunately, in this discussion we probably have more questions than answers. But, as always, we hope that asking the right questions is more than half the battle in terms of correctly anticipating change in the broader financial markets. First, let's have a quick look at what our for now former primary bankers in the Asian community have been doing as of late in terms of purchasing US Treasury securities. We've been tracking the top Asian players in the US Treasury market and have been "marking" their activity since November of last year. For 2004, November just happened to have been the peak month for combined Chinese and Japanese holdings of UST's. Here are the numbers: Over the last seven months (since last November), this group of Asian heavy hitters has collectively purchased all of $20.5 billion in US Treasuries. Given that this is an aggregate seven month purchase number, the annualized like time period equivalent clocks in at $35.1 billion. On a current "run rate" basis, so to speak, this is now where we stand. For a bit of glaring perspective, just compare it to the annual historical calendar numbers you see below. 
Again, this is the collective calendar based UST purchasing by the total Asian bloc you see in the table above: In other words, on an annualized run rate basis over the last seven months, the Asian community has virtually disappeared in terms of being meaningful Treasury buyers. This IS a big change. So, just who have been the big buyers who have picked up the slack in terms of foreign purchasing of US Treasuries? Just have a look. (By the way, these are the "major" foreign holders of US Treasuries we referred to above who collectively sold Treasuries in June of a magnitude not seen since 3/00.) There is no question that the former 800 pound gorilla UST buyer, Japan, is sitting out the current inning. As designated pinch hitter for Asia, China has firmly gripped the bat and stepped up to the plate. Above and beyond that, the UK has no peer in UST buying over the last seven months. And our wonderful friends in the Caribbean Banking Centers (thought to be the hedge funds) have been lending a friendly hand right alongside, but to a meaningfully lesser extent. But with Japan out of the game for now, the long term trailing twelve month buying of Treasuries by the foreign community in aggregate is heading south in rather abrupt fashion as we speak. Over the past twelve months, China has actually purchased more UST securities than has Japan. Maybe only fitting since the US trade deficit with China has been a mushroom cloud. Very quickly, net foreign buying of US stocks in June literally was a rounding error. It clocked in at $107 million. Hardly worth mentioning. And in government agency land, the twelve month rate of change in foreign buying is continuing to fall from what had been a very high level. It's in the world of US corporate bonds where the foreign community has really been stepping on the accelerator in 2005. Just have a peek at the chart below for a little perspective on June activity relative to historical precedent. Off the charts pretty much describes it. 
And what's a bit amazing is that, as you know, from an historical standpoint, yield spreads between US Treasury and domestic corporate debt is pretty darn tight these days. The current yield spread between Moody's Aaa debt and Treasuries is as tight as anything seen since 1997. And Moody's Baa debt is not much of a bargain either based on the yield spread relative to Treasuries. Why the heavy buying of corporates by the foreign community? Although we're really searching for many an answer in looking at all of this data ourselves, just maybe relative currency movements can help explain some of the recent activity. It just so happens that as we look over the entire last twelve months for the period ended 6/30/05, Europe was the largest single bloc buyer of Treasuries, US corporate bonds and US common stocks. And to be honest, they were a very close second to Asia in terms of the purchase of US government agency paper. As opposed to Asia having bought US financial assets in essence to attempt to support a sagging US dollar in the past, is the European community now piling into US financial assets to participate in a dollar that has strengthened over the first six months of the year relative to the Euro? In other words, is Europe looking for a return on investment? Asia was attempting to support their export driven economies. Two very different agendas. We're hoping that the answers to some of these questions will be revealed with the release of the July global capital flows data in a few weeks. As you know, the YTD rally in the US dollar recently topped literally on the first day of July. So in July we have a declining dollar and a strengthening Euro. Will the Asian and European community switch places in terms of buying US financial assets in July? We suggest it will be more than interesting to find out. 
Although currency movements have probably been a big driver in terms of the behavior and character of foreign purchasing of US financial assets YTD, the real question looking ahead is whether the European community has the staying power to continue purchasing US financial assets if the Asian community continues to stay away from the game as they clearly have for a good seven months now. For now, the headline numbers in terms of global capital flows into US financial assets look fine. The numbers are large and more than offset the trade flow imbalances. The only issue in terms of trying to match global flows of capital with goods trade imbalances is that it's Asia with which we have the massive trade imbalance, not Europe. That suggests recent European buying of US financial assets is investment based as opposed to a mercantilist economic practice. And if that's indeed the case, it would seem reasonable to believe that the Europeans will only stick around for a favorable rate of return and no more, whether the return is investment or currency related, or both. That has NOT been the case with Asian intentions and resulting US financial asset buying practices over the last three to four years. Lastly, among the recent European community of buyers, private sector buying has outweighed that being done by "official institutions" (central banks). Again, another big difference relative to former Asian buying. This really suggests the motivation of the European buying YTD is much different than has been the case with Asia over the last three years. Much different. So although the headlines regarding capital flows to US financial markets look good, it's what's happening beneath the headlines that really counts. And below those headlines are some major shifts occurring YTD relative to our experience of the last three years. One last comment prior to our leaving this subject. 
It's often said that America is borrowing the savings of the global economy vis-a-vis the trade deficit and recycling of US dollars back into US financial assets on the part of the foreign community. In other words, it's a good bit of conceptual vendor financing. We've even said it ourselves in the past. In part, there is definitely some truth to these comments. But what is also true is that to a very large extent, the global central banks, largely the Bank of Japan and the People's Bank of China, are creating liquidity (money) with which they are purchasing US financial assets. The following chart is an update of the importance of "foreign official institutions" (central banks) to the buying of US Treasury securities. One quick question. Without central bank buying of US Treasuries over the last few years, just where would Treasury yields be today? To be honest, we don't mean this question to be bearish or pessimistic. Truth be told, we're almost stunned longer term UST yields have stayed low YTD in the absence of Asian buying since late 2004. In our minds, that may really be the real conundrum of the moment. We have the feeling that at this point, since investors have "learned" over the past few years that spikes in 10 year Treasury rates have been short lived, it's going to take a sustained backup in rates that plateau say at or above 4.75% on the 10 year to bring in the real selling. And, of course, we have no idea if this is going to happen. 
But based on the current character of global flows of capital we discussed above, we'd suggest that the Treasury market may be at a more heightened level of supply/demand risk today (that ultimately translates into price) than at any time over the past three+ years given that the folks who don't really care about a rate of return (Asia), due to their practice of mercantilist economics, are no longer the significant buyers, and those that perhaps do care a lot (the private European sector) about absolute rate of return have been the marginal buyers of Treasuries so far this year. Implicitly, when it comes to the greater foreign community financing the ongoing US trade and government deficit, the game has changed and it seems so have the risks. That's The Way Love Goes ...We know that the US has become quite dependent on foreign flows of capital to finance domestic spending, funding of operations in Iraq, we could go on an on. Our trade and fiscal deficits are simply testimony to this fact. And that makes watching for changes in trend of global flows of capital extremely important both now and as we look ahead. We have a very hard time believing that the European community would have any interest in financing the longer term US fiscal and trade deficits via their continued buying of US financial assets under any circumstances, as the Asian community has really been doing for some time now, with the simple exception of the profit motive. Again, are we experiencing significant shifts in foreign flows of capital into US markets so far in 2005? Absolutely. But the real test of meaningful and significant change lies dead ahead when the global capital flow numbers for July, August and beyond become available. So far, movements in the Euro and the US dollar go a long way toward explaining the change we have seen in European and Asian purchasing of US financial assets YTD, as is clear in the chart below. 
For now, the 50 day moving averages of both the Euro and USD deserve watching. Admittedly, from a longer term perspective the Euro looks pretty darn oversold relative to the US dollar. Importantly, as we look ahead, if the dollar weakens from here, will the Asian community revert to their dollar propping ways of the last three to four years? Will mercantilist economic practices once again rear its head? Or is the dramatic change we have experienced in collective Asian purchasing of Treasuries, or really lack thereof, since last November the beginning of a very important shift in global capital flows that have really been vital to the US economy and financial markets up to this point? Although we're clearly jumping the gun in asking these questions at the moment, the answers lie in our near future. And we suggest that the answer to these questions will be quite meaningful for not only US financial asset opportunities and risks moving forward, but also the reality of the US economy. Stay tuned.
Table of Contents

JHBuild is a tool designed to ease building collections of source packages (also known as modules). It uses "module set" files to describe the modules available to build. These files include dependency information that allows JHBuild to work out what modules need to be built, and in what order, to build what the user requested.

JHBuild was originally written for building Gnome, but has since been extended to make it usable with other projects. A "module set" file can be hosted on a web server, allowing people to provide build rules independent of JHBuild.

JHBuild can build modules from a variety of sources, including CVS and Subversion repositories, GNU Arch archives, and tarballs.

JHBuild is not intended as a replacement for the distribution's package management system. Instead, it makes it easy to build everything into a separate install prefix so that it doesn't interfere with the rest of the system.

JHBuild takes a bit of work to set up on a system. As well as installing JHBuild's prerequisites, it is necessary to install the prerequisite tools needed to build the software in CVS (or wherever else it is stored).

Before downloading JHBuild, you should make sure you have a copy of Python >= 2.0 installed on your system. It is also essential that the Expat XML parser extension is installed. This will be the case if you are using Python >= 2.3, or had Expat installed when building Python. You can check whether this is the case by running the following simple command from the Python interpreter:

>>> import xml.parsers.expat
>>>

If this completes without an exception, then it is installed correctly.

At the moment, the only way to download JHBuild is via CVS. This can be achieved with the following commands. They should be run in the directory where jhbuild will be installed (for example, ~/cvs/gnome2).
$ cvs -d :pserver:anonymous@anoncvs.gnome.org:/cvs/gnome login
Logging in to :pserver:anonymous@anoncvs.gnome.org:2401/cvs/gnome
CVS password: press enter
$ cvs -d :pserver:anonymous@anoncvs.gnome.org:/cvs/gnome checkout jhbuild
$

This will download JHBuild into a jhbuild folder under the current directory. Now to build and install it:

$ cd jhbuild
$ make
...
$ make install
...
$

If these steps complete successfully, a small shell script should be installed in ~/bin to start JHBuild. If this directory is not in the PATH, it will need to be added (possibly by editing ~/.profile or ~/.bashrc).

Before JHBuild can be run, it will be necessary to set up a ~/.jhbuildrc file that configures how JHBuild will behave. The ~/.jhbuildrc file uses Python syntax to set a number of configuration variables for JHBuild. A minimal configuration file might look something like this:

moduleset = 'gnome-2.10'
modules = [ 'meta-gnome-desktop' ]
checkoutroot = os.path.join(os.environ['HOME'], 'cvs', 'gnome2')
prefix = os.path.join(os.environ['HOME'], 'prefix')
os.environ['INSTALL'] = os.path.join(os.environ['HOME'], 'bin', 'install-check')

This will get JHBuild to build the meta-gnome-desktop module (and its dependencies) from the gnome-2.10 module set. It will unpack source trees to ~/cvs/gnome2 and install modules to ~/prefix. It also sets the INSTALL environment variable to a program that handles installation of headers specially in order to decrease the work during a rebuild. Some of the configuration variables available include moduleset, modules, checkoutroot and prefix, as used in the example above.

Before any modules can be built, it is necessary to have certain build tools installed. These include the GNU autotools (autoconf, automake, libtool and gettext), pkg-config and Python.
JHBuild can check if your distro has installed these tools using the sanitycheck command:

$ jhbuild sanitycheck

If this command prints any messages, these can be fixed in one of two ways: by installing the missing tools through the distribution's package management system, or by using JHBuild's bootstrap command.

The bootstrap command can be invoked like so:

$ jhbuild bootstrap

This will download and install all the build prerequisites. Once it is finished, the sanitycheck command should be rerun to verify that everything is in place.

The bootstrap command does not build all the packages required by these tools. If the OS does not provide those packages, then they will need to be built separately. Some packages to check for include m4, perl and a C compiler.

Now that everything is set up, JHBuild can be used to build some software. To build all the modules selected in the ~/.jhbuildrc file, run the following command:

$ jhbuild build

This will download, configure, compile and install each of the modules. If an error occurs at any stage, JHBuild will present a menu asking the user what to do. The choices include dropping to a shell to fix the error, rerunning the build stage, giving up on the module (which will also cause any modules depending on it to fail), or ignoring the error and continuing.
It is also possible to build a different set of modules (and their dependencies) by passing their names as arguments to the build command:

$ jhbuild build gtk+

If you exit JHBuild part way through a build for some reason, it is possible to pick up a build at a particular package using the --start-at option:

$ jhbuild build --start-at=pango

To build one or more modules, without their dependencies, the buildone command can be used:

$ jhbuild buildone gtk+

To get a list of the modules jhbuild will build, and the order they will be built in, use the list command:

$ jhbuild list

To get information about a particular module, the info command can be used:

$ jhbuild info gtk+

If your internet bandwidth varies, you can get JHBuild to download or update all the software it will build in one go without actually building it:

$ jhbuild update

Later on, you can tell JHBuild to build everything without downloading or updating:

$ jhbuild build --no-network

If you want to run a particular command with the same environment variables set that JHBuild uses, use the run command:

$ jhbuild run program

To start a shell with that environment, use the shell command:

$ jhbuild shell

JHBuild uses a command line syntax similar to tools like CVS:

jhbuild [global-options] command [command-arguments]

The global jhbuild options are:

Command specific options are listed below.

The bootstrap command is used to install a set of build utilities required to build most modules (eg. autoconf, automake, etc).

jhbuild bootstrap

Internally, bootstrap is implemented using the same code as build, using the bootstrap.modules moduleset.

The build command is used to build one or more packages, including their dependencies.

jhbuild build [--autogen] [--clean] [--no-network] [--skip=module...] [--start-at=module] [-D date] [module...]

If no module names are given on the command line, then the module list found in the configuration file will be used.
The buildone command is similar to build, but it does not use dependency information to expand the module list. It is useful for quickly rebuilding one or more modules.

jhbuild buildone [--autogen] [--clean] [--no-network] [-D date] module...

The --autogen, --clean, --no-network and -D options are processed the same as for build. Unlike build, at least one module must be listed on the command line.

The dot command generates a file describing the directed graph formed by the dependencies between a set of modules. This file can then be processed using the GraphViz software to produce a nice diagram.

jhbuild dot [module...]

If no module names are given on the command line, then the module list found in the configuration file will be used. The output of this command can easily be piped to the dot utility to generate a postscript file:

$ jhbuild dot modules | dot -Tps > dependencies.ps

The info command is used to display information about one or more modules.

jhbuild info module...

The command prints the module name, type, dependencies, dependent packages, and the time it was last installed with JHBuild. It may also print some information specific to the module type, such as the CVS repository or download URL.

The list command is used to show the expanded list of modules the build command would build.

jhbuild list [--show-revision] [module...]

If no module names are given on the command line, then the module list found in the configuration file will be used.

The run command is used to run an arbitrary command using the same environment as JHBuild uses when building modules.

jhbuild run program [argument...]

If using JHBuild to build Gnome, this command can be useful in X startup scripts.

The sanitycheck command performs a number of checks to see whether the build environment is okay.

jhbuild sanitycheck

Some of the checks include:

The shell command starts the user's shell with the same environment as JHBuild uses when building modules.
jhbuild shell

This command is roughly equivalent to the following:

$ jhbuild run $SHELL

The tinderbox command is similar to build, but it stores the build output under the directory given by the --output option, so that the results can be reviewed afterwards:

jhbuild tinderbox [--autogen] [--clean] [--no-network] [--output=directory] [--skip=module...] [--start-at=module] [-D date] [module...]

The --autogen, --clean, --no-network, --skip, --start-at and -D options are processed the same as for build.

The update command is similar to build, but only performs the download or update stage for modules, without building them.

jhbuild update [--skip=module...] [--start-at=module] [-D date] [module...]

The --skip, --start-at and -D options are processed the same as for build.

The updateone command is similar to update, but it does not use dependency information to expand the module list. It is useful for quickly updating one or more modules.

jhbuild updateone [-D date] module...

The -D option is processed the same as for update. Unlike update, at least one module must be listed on the command line.

The ~/.jhbuildrc file uses standard Python syntax. The file is run, and the resulting variables defined in the namespace are used to control how JHBuild acts. A set of default values are inserted into the namespace before running the user's configuration file.

In addition to the above variables, there are some other things that can be set in the configuration file:

os.environ: this dictionary represents the environment of the process (which also gets passed on to processes that JHBuild spawns). Some environment variables you may want to set include CFLAGS, INSTALL (to use the more efficient install-check program included with JHBuild) and LDFLAGS.

addpath(envvar, pathname): this will add a directory to a PATH-style environment variable. It will correctly handle the case when the environment variable is initially empty (having a stray colon at the beginning or end of an environment variable can have unexpected consequences). This function has special handling for the ACLOCAL_FLAGS environment variable, which expects paths to be listed in the form -I pathname.
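To illustrate what addpath does, here is a simplified stand-alone reimplementation (it prepends the new path, which is what you generally want for an install prefix; the real function's exact ordering and environment handling may differ):

```python
import os

def addpath(envvar, pathname, env=os.environ):
    """Simplified sketch of JHBuild's addpath(): add pathname to a
    PATH-style variable, handling the initially-empty case so no stray
    colon is left at the beginning or end of the value."""
    if envvar == 'ACLOCAL_FLAGS':
        # ACLOCAL_FLAGS expects entries of the form "-I pathname",
        # separated by spaces rather than colons.
        extra = '-I %s' % pathname
        current = env.get(envvar, '')
        env[envvar] = (extra + ' ' + current).strip()
    else:
        current = env.get(envvar, '')
        env[envvar] = pathname if not current else pathname + ':' + current

# Demonstration against a private dict instead of the real environment:
env = {}
addpath('PATH', '/opt/prefix/bin', env)
addpath('PATH', '/opt/prefix/sbin', env)
addpath('ACLOCAL_FLAGS', '/opt/prefix/share/aclocal', env)
```

After these calls, env['PATH'] holds both directories joined with a single colon, and env['ACLOCAL_FLAGS'] holds the -I form, with no stray separators.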
After processing the configuration file, JHBuild will alter some paths based on variables such as prefix (eg. adding $prefix/bin to the start of PATH).

prependpath(envvar, pathname): the prependpath function works like addpath, except that the environment variable is modified after JHBuild has made its changes to the environment.

JHBuild uses simple XML files to describe the dependencies between modules. A RELAX-NG schema and Document Type Definition are included with JHBuild in the modulesets/ directory. The RELAX-NG schema makes it trivial to edit module set files using nxml-mode in Emacs.

The toplevel element in a module set file is the moduleset element. Currently no XML namespace is used, but in the future this might change. The elements below the toplevel come in three types: module sources, include statements and module definitions.

Rather than listing the full location of every module, a number of "module sources" are listed in the module set, and then referenced by name in the module definitions. As well as reducing the amount of redundant information in the module set, this makes it easy for a user to specify an alternative source for those modules (for CVS and Subversion, it is common for developers and users to use different repository access methods).

The cvsroot element is used to describe a CVS repository:

<cvsroot name="rootname" [ default="yes|no" ]
         root="anonymous-cvsroot" [ password="anonymous-password" ]/>

The name attribute should be a unique identifier for the CVS repository. The default attribute says whether this is the default module source for this module set file. The root attribute lists the CVS root used for anonymous access to this repository, and the password attribute gives the password used for anonymous access.

The svnroot element is used to describe a Subversion repository:

<svnroot name="rootname" [ default="yes|no" ] href="repository-base-url"/>

The name attribute should be a unique identifier for the Subversion repository. The default attribute says whether this is the default module source for this module set file. The href attribute lists the base URL for the repository.
This will probably be either a http, https or svn URL.

The arch-archive element is used to describe a GNU Arch archive.

<arch-archive name="archivename" [default="yes"]
              href="mirror-url"/>

The name attribute should be the Arch archive name. The default attribute says whether this is the default module source for this module set file. The href attribute lists a public mirror URL for the archive.

JHBuild allows one module set to include the contents of another by reference using the include element.

<include href="uri"/>

The href is a URI reference to the module set to be included, relative to the file containing the include element. Only module definitions are imported from the referenced module set — module sources are not. Multiple levels of includes are allowed, but include loops are not (there isn't any code to handle loops at the moment).

There are various types of module definitions that can be used in a module set file, and the list can easily be extended. Only the most common ones will be mentioned here.

The cvsmodule element is used to define a module that is to be built from CVS.

<cvsmodule module="modulename" [revision="revision"] [root="rootname"]
           [checkoutdir="directory"] [autogenargs="autogenargs"]
           [makeargs="makeargs"] [supports-non-srcdir-builds="yes|no"]>
  <dependencies>
    <dep package="modulename"/>
    ...
  </dependencies>
  <suggests>
    <dep package="modulename"/>
    ...
  </suggests>
</cvsmodule>

The module, revision and root attributes identify the module to check out from CVS. The checkoutdir attribute can be used to specify an alternative directory to check out to (by default, the value of module is used). The autogenargs, makeargs and supports-non-srcdir-builds attributes are common to many different module types. The autogenargs attribute lists additional arguments to be passed to autogen.sh, and makeargs lists additional arguments to be passed to make. The supports-non-srcdir-builds attribute is used to mark modules that can't be cleanly built using a separate source directory. The dependencies and suggests elements are used to declare the dependencies of the module.
Any modules listed in the dependencies element will be added to the module list for jhbuild build if they aren't already included, and JHBuild will make sure the dependent modules are built first. After generating the modules list, the modules listed in the suggests element will be used to further sort the modules list (although it will not pull in any additional modules). This is intended for cases where a module has an optional dependency on another module.

The svnmodule element is used to define a module that is to be built from Subversion.

<svnmodule module="modulename" [root="rootname"]
           [checkoutdir="directory"] [autogenargs="autogenargs"]
           [makeargs="makeargs"] [supports-non-srcdir-builds="yes|no"]>
  <dependencies>
    <dep package="modulename"/>
    ...
  </dependencies>
  <suggests>
    <dep package="modulename"/>
    ...
  </suggests>
</svnmodule>

The module attribute gives the path of the module relative to the repository URI. All other options for this element are processed as for cvsmodule.

The archmodule element is used to define a module that is to be built from a GNU Arch archive.

<archmodule version="modulename" [root="archivename"]
            [checkoutdir="directory"] [autogenargs="autogenargs"]
            [makeargs="makeargs"] [supports-non-srcdir-builds="yes|no"]>
  <dependencies>
    <dep package="modulename"/>
    ...
  </dependencies>
  <suggests>
    <dep package="modulename"/>
    ...
  </suggests>
</archmodule>

The version attribute gives the version to be checked out from the archive specified by root. All other options for this element are processed as for cvsmodule.

The tarball element is used to define a module that is to be built from a tarball.

<tarball id="modulename" version="version"
         [checkoutdir="directory"] [autogenargs="autogenargs"]
         [makeargs="makeargs"] [supports-non-srcdir-builds="yes|no"]>
  <source href="source-url" [size="size"] [md5sum="md5sum"]/>
  <patches>
    <patch file="filename" strip="level"/>
    ...
  </patches>
  <dependencies>
    <dep package="modulename"/>
    ...
  </dependencies>
  <suggests>
    <dep package="modulename"/>
    ...
  </suggests>
</tarball>

The id and version attributes are used to identify the module. The source element specifies the file to download and compile. The href attribute is mandatory, while the size and md5sum attributes are optional. If the last two attributes are present, they are used to check that the source package was downloaded correctly. The patches element is used to specify one or more patches to apply to the source tree after unpacking.
The patch files are looked up in the jhbuild/patches/ directory, and the strip attribute says how many levels of directories to prune when applying the patch. The other attributes and the dependencies and suggests sub-elements are processed as for cvsmodule.

The metamodule element defines a module that doesn't actually do anything. The only purpose of a module of this type is its dependencies.

<metamodule id="modulename">
  <dependencies>
    <dep package="modulename"/>
    ...
  </dependencies>
  <suggests>
    <dep package="modulename"/>
    ...
  </suggests>
</metamodule>

The id attribute gives the name of the module. The child elements are handled as for cvsmodule.
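Tying the element descriptions above together, a minimal module set file might look like the following. The repository root and module names are made up for illustration:

```xml
<?xml version="1.0"?>
<moduleset>
  <!-- a module source, referenced by name from the module definitions -->
  <cvsroot name="example.org" default="yes"
           root=":pserver:anonymous@anoncvs.example.org:/cvs/example"
           password=""/>

  <cvsmodule module="libexample"/>

  <cvsmodule module="example-app" autogenargs="--disable-docs">
    <dependencies>
      <dep package="libexample"/>
    </dependencies>
  </cvsmodule>

  <!-- a metamodule exists only for its dependencies -->
  <metamodule id="meta-example">
    <dependencies>
      <dep package="example-app"/>
    </dependencies>
  </metamodule>
</moduleset>
```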
http://www.gnome.org/~jamesh/jhbuild.html
Important: Please read the Qt Code of Conduct - QML Call function from another QML Hi. I have again a question, which I haven't found solution for now. I need to call in specific event function from another QML object. I have this: main.qml import QtQuick 2.2 import QtQuick.Window 2.10 .... Window { id: root visible: true .... StackView { id: mainStackView; anchors.fill: parent initialItem: mainPage } Component { id: mainPage CalendarView {} } Component { id: viewPage //property alias viewComponent: viewPageEventViewElement EventView { id:viewPageEventViewElement } /*onCompleted: { viewPageEventViewElement.setEvent(calendarWindow.currentEvent); }*/ /*function setEvent(modelobj) { viewPageEventViewElement.setEvent(modelobj); }*/ } } CalendarView.qml import QtQuick 2.2 .... Item { id: calendarWindow .... //property var currentEvent .... MouseArea { .... onClicked: { //calendarWindow.currentEvent = modelData if (mouse.button === Qt.RightButton) { .... } else { mainStackView.push(viewPage); //viewPageEventViewElement.setEvent(modelData); } } .... } .... } EventView.qml import QtQuick 2.2 .... Item { id: eventWindow .... function setEvent(modelobj) { .... } So, I've tried to make property for component in main.qml - I've got crashing with 255. When I comment line with property, program works (but not events). Also, I've tried to call event using component ID - viewPageEventViewElement. It says that reference for this ID not defined. I've tried to use properties in CalendarView.qml (currentEvent property), and call it from main.qml, and I've got same as first error - program doesn't running, and sends 255 error. I've heard about signals, but I think (and I hope) it has another easy looking solution for this, because I need an argument for function too. Thank you. - Shrinidhi Upadhyaya last edited by Shrinidhi Upadhyaya Hi @AriosJentu , i guess you just need to call a function of another Class. 
I have made few changes to your code and have written a sample code:- CalendarView.qml Item { id: calendarWindow height: 100 width: 100 //property var currentEvent //####Display Purpose Rectangle { anchors.fill: parent color: "red" } MouseArea { onClicked: { //calendarWindow.currentEvent = modelData if (mouse.button === Qt.RightButton) { } else { mainStackView.push(viewPage); //viewPageEventViewElement.setEvent(modelData); } } } function setEventCV(txt) { console.log(txt) } } EventView.qml Item { id: eventWindow height: 100 width: 100 //####Display Purpose Rectangle { anchors.fill: parent color: "green" } function setEventEV(txt) { console.log(txt) } } main.qml StackView { id: mainStackView; anchors.fill: parent initialItem: mainPage } Component { id: mainPage Row { spacing: 10 EventView { Component.onCompleted: { setEventEV("Event View"); } } CalendarView { Component.onCompleted: { setEventCV("Calendar View"); } } } } Sample Output:- Thank you. But, as you see, I need to call function from CalendarView class for EventVIew class. Here - when I press something on CalendarView, I've send event with some parameters of CalendarView to EventView, and I think, your code isn't work in my situation. But also, thank you, I've understood how it works using it in Main class. - Shrinidhi Upadhyaya last edited by Hi @AriosJentu , i guess you can do that:- Give an id to EventView in main.qml, something like this:- EventView { id: ev } Make this code change inside CalendarView.qml MouseArea { anchors.fill: parent onClicked: { ev.setEventEV("EventView"); } } So when you click on the red Rectangle, you will get a log "EventView":- - AriosJentu last edited by AriosJentu Also, thank you. But I've already got to EventView it's id, but I can't call it from another file: main.qml: .... Component { id: viewPage EventView { id: viewPageEventViewElement } } .... I've also added to EventView.qml function just to send info in log: EventView.qml: .... 
function sendMessage(message) { console.log(message); } .... And then I call this function for object with viewPageEventViewElement id in my CalendarView.qml: MouseArea { onClicked: { //calendarWindow.currentEvent = modelData if (mouse.button === Qt.RightButton) { } else { mainStackView.push(viewPage); viewPageEventViewElement.sendMessage("Hello world!"); //<<<< Line 277 //viewPageEventViewElement.setEvent(modelData); } } } And I've got this: qrc:/qml/CalendarView.qml:277: ReferenceError: viewPageEventViewElement is not defined Same as start of the topic. I think, maybe it will be better send my EventView class outside of it's component in main.qml, but I need to add it inside component using ID, but I don't know, how. Also, check full code you can here: My GitHub Found solution. I've just used StackView's parameter - currentItem, and for it I called a setEvent method: MouseArea { onClicked: { if (mouse.button === Qt.RightButton) { } else { mainStackView.push(viewPage); mainStackView.currentItem.setEvent(modelData); } } } Thanks.
https://forum.qt.io/topic/103891/qml-call-function-from-another-qml/2
Introduction

Smart device applications support many remote-data scenarios. They can connect to databases in various ways; normally they keep a local database stored in the Temp folder. Smart device data access comes in two flavours: the ADO.NET architecture supports both SQL Server and SQL Server CE, and both are used for operations such as insert, update and delete. SqlServerCe is used for Windows CE based applications that work disconnected from a SQL Server database, while SqlClient is used for CE based applications that have a direct connection to a SQL Server database. The Pocket PC works just like an emulator: it reflects the operation of a real mobile device. Normally a smart device keeps its local database with the help of SQL Server CE; deploying the application will install its own framework and SQL Server CE. For connecting to a remote database or to our local temp database we will use SqlServerCe or SqlClient. SqlServerCe is used to run the application on Windows CE based platforms. The interfaces needed to interact with SQL Server CE Replication and RDA are SqlCeReplication and SqlCeRemoteDataAccess, and the SqlCeEngine class is used to create and manage databases.

Why we prefer SqlServerCe

Normally SqlClient on Windows CE based devices may leverage the Windows authentication protocol instead of using SQL Server authentication. The connection string cannot support AttachDBFile, Max Pool Size, Connection Lifetime, Connection Reset or Pooling. We can connect to a remote SQL database using two methods.

Replication

Replication enables access to SQL Server; it is message-based data access. SQL Server CE synchronizes with SQL Server by using an HTTP connection with the help of IIS.

RDA Method

RDA's tracked PULL and PUSH methods use optimistic concurrency control.

PULL Method: An application calls the Pull method to extract data from a SQL Server database and store the data in SQL Server CE.

PUSH Method: An application calls the Push method to transmit changes from a pulled, tracked table in SQL Server CE back to the SQL Server table.
Open the smart device application; we will set some data provider references first. Click the Project menu and click Add Reference:

Microsoft.WindowsCE.Forms
System.Data.Common
System.Data.SqlServerCE
System.Windows.Forms

After setting the references, import the SqlServerCe namespace:

Imports System.Data.SqlServerCE

Let's start with the traditional "Employee Details" application, creating an application on the desktop using Visual Studio .NET.

Create a new SDE project using Visual Basic .NET. Open the "New Project" dialog and select, from "Visual Basic Projects", the "Smart Device Application" template (see Figure 1).

Fig 1. New Project dialog for Smart Device Application

Give your application a name (such as "Employee Details"). This will create a new directory under the chosen location to contain your application's source code. Then click OK. This starts the "Smart Device Application Wizard", where you choose the specific type, Pocket PC or Windows CE.

Fig 2. Smart Device Application Wizard

Choose the Platform (use Pocket PC if you will be using the emulator). Choose the Project Type. For our first simple application, which simply displays a form on the device, choose "Windows Application". Then click OK.

In the form load event we will create the connection, but before that we create a SqlCeEngine; it will create the temp database within the emulator:

Dim engine As New System.Data.SqlServerCe.SqlCeEngine("data source=\temp\login.sdf")
engine.CreateDatabase()

Then we will create a table after opening the connection to the database:

con.Open()
Dim str As String = "create table login(name ntext,pass ntext)"

Clicking the OK button will insert the values into the local DB.
Dim str As String = "insert into login values('" & user.Text & "','" & pass.Text & "')"
sqlcmd.CommandText = str
sqlcmd.ExecuteNonQuery()
con.Close()

The delete process repeats the same steps:

Dim con As New System.Data.SqlServerCe.SqlCeConnection("data source=\temp\login.sdf")
con.Open()
Dim sqlcmd As System.Data.SqlServerCe.SqlCeCommand = con.CreateCommand()
sqlcmd.CommandText = "delete from login where name ='" & user.Text & "'"
sqlcmd.ExecuteNonQuery()
con.Close()

©2014 C# Corner. All contents are copyright of their authors.
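The RDA Pull/Push cycle described in the introduction is not shown above. A rough sketch follows; the connection strings, server names and agent URL are placeholders, and the exact SqlCeRemoteDataAccess properties and overloads should be checked against the SQL Server CE documentation:

```vb
Imports System.Data.SqlServerCe

Dim rda As New SqlCeRemoteDataAccess()
' Local .sdf database and the IIS server agent URL (placeholders).
rda.LocalConnectionString = "Data Source=\temp\login.sdf"
rda.InternetUrl = "http://myserver/sqlce/sscesa20.dll"

' PULL: copy rows from the remote SQL Server table into a local
' tracked table, so that changes can be pushed back later.
rda.Pull("login", "SELECT * FROM login", _
         "Provider=SQLOLEDB;Data Source=myserver;Initial Catalog=mydb;" & _
         "Integrated Security=SSPI", RdaTrackOption.TrackingOn)

' ... local inserts, updates and deletes happen here ...

' PUSH: transmit the tracked changes back to SQL Server.
rda.Push("login", "Provider=SQLOLEDB;Data Source=myserver;" & _
         "Initial Catalog=mydb;Integrated Security=SSPI")
rda.Dispose()
```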
http://www.c-sharpcorner.com/UploadFile/senthurganesh/smart-device-application-with-VB-Net/
Antoine Pitrou wrote: > Le Fri, 12 Feb 2010 17:14:57 +0000, Steven D'Aprano a écrit : >>. > > Well, I think the Ruby people got it right. Python *does* pass parameters > by reference. After all, we even speak about reference counting, > reference cycles, etc. > > So I'm not sure what distinction you're making here. > He's distinguishing what Python does from the "call by reference" which has been used since the days of Algol 60. As has already been pointed out, if Python used call by reference then the following code would run without raising an AssertionError: def exchange(a, b): a, b = b, a x = 1 y = 2 exchange(x, y) assert (x == 2 and y == 1) Since function-local assignment always takes place in the function call's local namespace Python does not, and cannot, work like this, and hence the term "call by reference" is inapplicable to Python's semantics. regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 PyCon is coming! Atlanta, Feb 2010 Holden Web LLC UPCOMING EVENTS:
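A companion example (added here for illustration, not part of the original post) makes the same point from the other side: the caller sees mutations made through the shared reference, but not rebindings of the function-local name:

```python
def mutate(lst):
    lst.append(99)      # mutates the object both names refer to

def rebind(lst):
    lst = [99]          # rebinds only the function-local name

data = [1, 2]
mutate(data)
assert data == [1, 2, 99]   # mutation is visible to the caller

rebind(data)
assert data == [1, 2, 99]   # rebinding is not
```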
https://mail.python.org/pipermail/python-list/2010-February/567909.html
Provided by: iproute2_4.3.0-1ubuntu3_amd64

NAME
tc - show / manipulate traffic control settings

SYNOPSIS
tc [ OPTIONS ] qdisc [ add | change | replace | link | delete ] dev DEV [ parent qdisc-id | root ] [ handle qdisc-id ] qdisc [ qdisc specific parameters ]

tc [ OPTIONS ] class [ add | change | replace | delete ] dev DEV parent qdisc-id [ classid class-id ] qdisc [ qdisc specific parameters ]

tc [ OPTIONS ] filter [ add | change | replace | delete ] dev DEV [ parent qdisc-id | root ] protocol protocol prio priority filtertype [ filtertype specific parameters ] flowid flow-id

tc [ OPTIONS ] [ FORMAT ] qdisc show [ dev DEV ]

tc [ OPTIONS ] [ FORMAT ] class show dev DEV

tc [ OPTIONS ] filter show dev DEV

OPTIONS := { [ -force ] -b[atch] [ filename ] | [ -n[etns] name ] | [ -nm | -nam[es] ] | [ { -cf | -c[onf] } [ filename ] ] }

FORMAT := { -s[tatistics] | -d[etails] | -r[aw] | -p[retty] | -i[ec] | -g[raph] }

DESCRIPTION
Tc is used to configure Traffic Control in the Linux kernel.

QDISCS
qdisc is short for 'queueing discipline' and it is elementary to understanding traffic control. Whenever the kernel needs to send a packet to an interface, it is enqueued to the qdisc configured for that interface.

FILTERS
A filter is used by a classful qdisc to determine in which class a packet will be enqueued. Whenever traffic arrives at a class with subclasses, it needs to be classified.

u32
Generic filtering on arbitrary packet data, assisted by syntax to abstract common operations. See tc-u32(8).

CLASSFUL QDISCS

PARAMETERS
The following parameters are widely used in TC. For other parameters, see the man pages for individual qdiscs.

RATES
Bandwidths or rates. These parameters accept a floating point number, possibly followed by a unit (both SI and IEC units supported).

-n, -net, -netns <NETNS>
switches tc to the specified network namespace NETNS.
Actually it just simplifies executing of:

ip netns exec NETNS tc [ OPTIONS ] OBJECT { COMMAND | help }

to

tc -n[etns] NETNS [ OPTIONS ] OBJECT { COMMAND | help }

-cf, -conf <FILENAME>
specifies path to the config file. This option is used in conjunction with other options (e.g. -nm).

FORMAT
The show command has additional formatting options:

-s, -stats, -statistics
output more statistics about packet usage.

-d, -details
output more detailed information about rates and cell sizes.

-r, -raw
output raw hex values for handles.

-p, -pretty
decode filter offset and mask values to equivalent filter commands based on TCP/IP.

-iec
print rates in IEC units (ie. 1K = 1024).

-g, -graph
shows classes as ASCII graph. Prints generic stats info under each class if -s option was specified. Classes can be filtered only by dev option.

SEE ALSO
tc-red(8), tc-route(8), tc-sfb(8), tc-sfq(8), tc-stab(8), tc-tbf(8), tc-tcindex(8), tc-u32(8)

User documentation at but please direct bugreports and patches to: <netdev@vger.kernel.org>

AUTHOR
Manpage maintained by bert hubert (ahu@ds9a.nl)
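As an illustration of how qdiscs, classes and filters combine, here is an HTB shaping setup; the device name and rates are examples, and the commands require root:

```
# tc qdisc add dev eth0 root handle 1: htb default 20
# tc class add dev eth0 parent 1: classid 1:1 htb rate 1mbit
# tc class add dev eth0 parent 1:1 classid 1:10 htb rate 768kbit ceil 1mbit
# tc class add dev eth0 parent 1:1 classid 1:20 htb rate 256kbit ceil 1mbit
# tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
      match ip dport 22 0xffff flowid 1:10
# tc -s qdisc show dev eth0
```

SSH traffic (destination port 22) is steered into the faster 1:10 class by the u32 filter; everything else falls into the default 1:20 class.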
https://manpages.ubuntu.com/manpages/xenial/en/man8/tc.8.html
Tuple space

Tuple space is a programming language created in 1988. Read more on Wikipedia...

- Tuple space ranks in the top 25% of languages
- the Tuple space wikipedia page
- Tuple space first appeared in 1988
- See also: linda, java, lisp, lua, prolog, python, ruby, smalltalk, tcl, isbn

Example code from Wikipedia:

// Client
public class Client {
    public static void main(String[] args) throws Exception {
        JavaSpace space = (JavaSpace) space();
        SpaceEntry e = space.take(new SpaceEntry(), null, Long.MAX_VALUE);
        System.out.println(e.service());
        space.write(e, null, Lease.FOREVER);
    }
}

Last updated August 9th, 2020
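To make the model concrete, here is a toy, single-process illustration of the three classic tuple-space operations (write, read, take). Real implementations such as JavaSpaces or Linda add template matching on typed entries, blocking semantics and leases; none of that is modelled here:

```python
class ToyTupleSpace:
    def __init__(self):
        self._tuples = []

    def write(self, tup):
        self._tuples.append(tup)

    def _match(self, template, tup):
        # None in the template acts as a wildcard field.
        return len(template) == len(tup) and all(
            t is None or t == f for t, f in zip(template, tup))

    def read(self, template):
        # Non-destructive: the matching tuple stays in the space.
        for tup in self._tuples:
            if self._match(template, tup):
                return tup
        return None

    def take(self, template):
        # Destructive: the matching tuple is removed from the space.
        for i, tup in enumerate(self._tuples):
            if self._match(template, tup):
                return self._tuples.pop(i)
        return None

space = ToyTupleSpace()
space.write(("task", 1))
space.write(("result", 1, 42))
print(space.read(("result", None, None)))   # ('result', 1, 42)
print(space.take(("task", None)))           # ('task', 1)
print(space.read(("task", None)))           # None
```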
https://codelani.com/languages/tuple-space.html
This made me feel nostalgic and I have decided to pack all the shellcodes that I have written over the years into a tarball (linux_x86_shellcodes.tar.gz) and re-publish it. The feedback I got was great, and it inspired me to go and write a new shellcode. So, I did. I have written a new linux x86 execve() shellcode that executes the Python interpreter with a Python program passed in as a string. Why call Python and not /bin/sh, you ask? Because a Python script makes it easier to customize and/or automate penetration testing (especially post exploitation). Python is a cross-platform programming language with a decent standard library, and is known to run on almost any operating system or hardware platform. A Python script can query the OS, CPU, HOSTNAME, and even IP address, and based on this information call different functions and/or use different parameters. Shell script, as good as it may be, is still dependent on various binaries being installed beforehand to be able to run successfully. Also, shell scripts are not cross-platform. Let's have a look at how it works.
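The kind of environment probing described above might look like the following. This is a generic sketch added for illustration, not code from the post:

```python
import platform
import socket

def probe():
    """Gather basic host facts a post-exploitation script could branch on."""
    return {
        "os": platform.system(),        # e.g. 'Linux', 'Windows', 'Darwin'
        "machine": platform.machine(),  # e.g. 'x86_64'
        "hostname": socket.gethostname(),
    }

info = probe()
if info["os"] == "Linux":
    pass  # call Linux-specific helpers here, other branches for other platforms
```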
Here's the shellcode source code: .section .text .global _start _start: push $0xb pop %eax cdq push %edx push $0x20292763 push $0x65786527 push $0x2c273e67 push $0x6e697274 push $0x733c272c push $0x29286461 push $0x65722e29 push $0x2779702e push $0x646c726f push $0x776f6c6c push $0x65682f32 push $0x34323834 push $0x3639322f push $0x752f6d6f push $0x632e786f push $0x62706f72 push $0x642e6c64 push $0x2f2f3a70 push $0x74746827 push $0x286e6570 push $0x6f6c7275 push $0x2e326269 push $0x6c6c7275 push $0x28656c69 push $0x706d6f63 push $0x20636578 push $0x653b3262 push $0x696c6c72 push $0x75207472 push $0x6f706d69 mov %esp, %esi push %edx pushw $0x632d mov %esp, %ecx push %edx push $0x6e6f6874 push $0x79702f6e push $0x69622f72 push $0x73752f2f mov %esp,%ebx push %edx push %esi push %ecx push %ebx mov %esp,%ecx int $0x80What the shellcode does is call execve() syscall with '/usr/bin/python' as the filename argument, and '-c' and a one-line Python program as the argv array argument. There's no need for a cleanup code, as execve() does not return on success, and the text, data, bss, and stack of the calling process are overwritten by that of the program loaded. Here is the Python one-line program source code: import urllib2 ; exec compile(urllib2.urlopen('').read(), '<string>', 'exec')What the Python program does is import the urllib2 library and use it to retrieve a Python script from a remote Web server, and then compiles and executes it on the fly. More to it, The retrieved Python script remains in memory the whole time. The Python program does all of the above without writing any data to the hard drive. Here is the retrieved Python script (i.e. helloworld.py) source code: print "Hello, world"Depending on the nature of the retrieved Python script, it might be enough to just use eval() instead of exec and compile() combination in the one-line Python program. In this case, print is a statement in Python 2.x and as such it can not be evaluated using eval(). 
in Python 3, print() is a function and can be evaluated using eval(). If in doubt, always use the exec and compile() combination. To compile the shellcode source and test it: $ as -o python-execve-urllib2-exec.o python-execve-urllib2-exec.s $ ld -o python-execve-urllib2-exec python-execve-urllib2-exec.o $ ./python-execve-urllib2-execThe output should be: Hello, worldHere is the shellcode represented as a hex string within a C program: char shellcode[] = "\x6a\x0b" // push $0xb "\x58" // pop %eax "\x99" // cdq "\x52" // push %edx "\x68\x63\x27\x29\x20" // push $0x20292763 "\x68\x27\x65\x78\x65" // push $0x65786527 "\x68\x67\x3e\x27\x2c" // push $0x2c273e67 "\x68\x74\x72\x69\x6e" // push $0x6e697274 "\x68\x2c\x27\x3c\x73" // push $0x733c272c "\x68\x61\x64\x28\x29" // push $0x29286461 "\x68\x29\x2e\x72\x65" // push $0x65722e29 "\x68\x2e\x70\x79\x27" // push $0x2779702e "\x68\x6f\x72\x6c\x64" // push $0x646c726f "\x68\x6c\x6c\x6f\x77" // push $0x776f6c6c "\x68\x32\x2f\x68\x65" // push $0x65682f32 "\x68\x34\x38\x32\x34" // push $0x34323834 "\x68\x2f\x32\x39\x36" // push $0x3639322f "\x68\x6f\x6d\x2f\x75" // push $0x752f6d6f "\x68\x6f\x78\x2e\x63" // push $0x632e786f "\x68\x72\x6f\x70\x62" // push $0x62706f72 "\x68\x64\x6c\x2e\x64" // push $0x642e6c64 "\x68\x70\x3a\x2f\x2f" // push $0x2f2f3a70 "\x68\x27\x68\x74\x74" // push $0x74746827 "\x68\x70\x65\x6e\x28" // push $0x286e6570 "\x68\x75\x72\x6c\x6f" // push $0x6f6c7275 "\x68\x69\x62\x32\x2e" // push $0x696c6c72 "\x68\x75\x72\x6c\x6c" // push $0x6c6c7275 "\x68\x69\x6c\x65\x28" // push $0x28656c69 "\x68\x63\x6f\x6d\x70" // push $0x706d6f63 "\x68\x78\x65\x63\x20" // push $0x20636578 "\x68\x62\x32\x3b\x65" // push $0x653b3262 "\x68\x72\x6c\x6c\x69" // push $0x696c6c72 "\x68\x72\x74\x20\x75" // push $0x75207472 "\x68\x69\x6d\x70\x6f" // push $0x6f706d69 "\x89\xe6" // mov %esp,%esi "\x52" // push %edx "\x66\x68\x2d\x63" // pushw $0x632d "\x89\xe1" // mov %esp,%ecx "\x52" // push %edx "\x68\x74\x68\x6f\x6e" // push $0x6e6f6874 
"\x68\x6e\x2f\x70\x79" // push $0x79702f6e "\x68\x72\x2f\x62\x69" // push $0x69622f72 "\x68\x2f\x2f\x75\x73" // push $0x73752f2f "\x89\xe3" // mov %esp,%ebx "\x52" // push %edx "\x56" // push %esi "\x51" // push %ecx "\x53" // push %ebx "\x89\xe1" // mov %esp, %ecx "\xcd\x80"; // int $0x80 int main(int argc, char **argv) { int *ret; ret = (int *)&ret + 2; (*ret) = (int) shellcode; }Again, to run and test it is as easy as: $ gcc -o python-execve-urllib2-exec python-execve-urllib2-exec.c $ ./python-execve-urllib2-execThe output should be the same: Hello, worldNow, I have also decided to open a GitHub repository to host the collection of shellcodes that I have written in the past, as well as any that I may write in the future. I have committed both .S, and .C versions of this shellcode to it, as well as the rest of the shellcodes from the tarball. The repository can be found at: Feel free to fork, and if you wish, to submit a pull-request and fix bug or suggest a change.
http://blog.ikotler.org/2012/05/linuxx86-execve-python-interpreter-with.html
CC-MAIN-2014-52
refinedweb
1,016
71.07
Main class of the Oculars plug-in. More... #include <Oculars.hpp> Main class of the Oculars plug-in. Definition at line 75 of file Oculars.hpp.. This method is called with we detect that our hot key is pressed. It handles determining if we should do anything - based on a selected object. Return the value defining the order of call for the given action For example if stars.callOrder[ActionDraw] == 10 and constellation.callOrder[ActionDraw] == 11, the stars module will be drawn before the constellations. Reimplemented from StelModule. This method is needed because the MovementMgr classes handleKeys() method consumes the event. Because we want the ocular view to track, we must intercept & process ourselves. Only called while flagShowOculars or flagShowCCD == true. Reimplemented from StelModule. Handle mouse clicks. Please note that most of the interactions will be done through the GUI module. Reimplemented from StelModule. Initialize itself. If the initialization takes significant time, the progress should be displayed on the loading bar. Implements StelModule. amount must be a number. indexString must be an integer, in the range of -1:ccds.count() indexString must be an integer, in the range -1:lense.count<() indexString must be an integer, in the range of -1:oculars.count() indexString must be an integer, in the range of -1:telescopes.count() Toggles the sensor frame overlay. Toggles the sensor frame overlay (overloaded for blind switching). Toggles the Telrad sight overlay. Toggles the Telrad sight overlay (overloaded for blind switching). Update the module with respect to the time. Implements StelModule. Definition at line 108 of file Oculars.hpp. Update the ocular, telescope and sensor lists after the removal of a member. Necessary because of the way model/view management in the OcularDialog is implemented.
http://stellarium.org/doc/0.15.1/classOculars.html
CC-MAIN-2017-04
refinedweb
287
62.34
im_similarity_area, im_similarity - apply a similarity transform to an image #include <vips/vips.h> int im_similarity_area(in, out, s, a, dx, dy, x, y, w, h) IMAGE *in, *out; double s, a, dx, dy; int x, y; int w, h; int im_similarity(in, out, s, a, dx, dy) IMAGE *in, *out; double s, a, dx, dy; im_similarity_area() applies a similarity transformation on the image held by the IMAGE descriptor in and puts the result at the location pointed by the IMAGE descriptor out. in many have any number of bands, be any size, and have any non-complex type.. The functions return 0 on success and -1 on error. As with most resamplers, im_similarity performs poorly at the edges of images. similarity(1), similarity_area(1) N. Dessipris - 13/01/1992 J.Ph. Laurent - 12/12/92 J. Cupitt - 22/02/93 13 January 1992
http://huge-man-linux.net/man3/im_similarity_area.html
CC-MAIN-2018-26
refinedweb
143
56.96
Jim Meyering wrote: > Bruno Haible wrote: >> On HP-UX 11.31 with cc: >> FAIL: rm/deep-2 (exit: 1) >> ========================= >> + : perl >> + perl -e 'my $d = "x" x 200; foreach my $i (1..52)' -e ' { mkdir ($d, >> 0700) && chdir $d or die "$!" }' >> + cd .. >> + echo n >> + rm ---presume-input-tty -r x >> rm: cannot remove x/xxxxxxxxx... ...': >> File name too long >> + fail=1 > > That name is 1207 bytes long, which must be larger than HPUX 11.31's PATH_MAX. > > remove.c's write_protected_non_symlink must be > calling euidaccess_stat with the long "full_name". > Obviously, that would fail with "File name too long". > > This is a problem on HPUX because it lacks *at-function support. > One way to work around it would be to change this: > > - if (!openat_needs_fchdir ()) > + if (1) > > But that would make rm use the fully emulated faccessat, which may actually > call fchdir, and which fails in the unusual event that save_cwd fails. > > This is all in a very deep dark corner, so my first reaction reluctance > to compromise the implementation just to accommodate systems that lack > openat and/or /proc/self/fd support. However, once my brain engaged, > I realized that using "imperfect" at-function emulation here would have > no impact. What happens when we determine this file is removable? > > We unlink it via unlinkat. > > That unlinkat function uses the very same underlying emulation code > that faccessat does, so there is no reason to limit faccessat > use to when we have adequate openat/proc support. Just use it all > of the time and remove the ugly hacks. While the hacks were ugly (and the code is now gone), I've deliberately left most of the comments as a warning to anyone who thinks openat-style functions are *not* an essential part of any system, these days. > Patch coming up... Here's the patch: It's not often I get to remove 3 files from coreutils, these days. 
>From a6ed806f738c06a380e418df9ef254949118dfa5 Mon Sep 17 00:00:00 2001 From: Jim Meyering <address@hidden> Date: Sun, 9 Oct 2011 10:52:52 +0200 Subject: [PATCH] rm: do not resort to stat'ing very long names even on deficient systems This change affects only uses of rm with -r but without -f and on systems lacking support for openat-like functions. * src/remove.c (write_protected_non_symlink): Call faccessat unconditionally. Thus we no longer need euidaccess_stat, which was the sole function used here to operate on a full relative file name. Remove full_name parameter and update caller. Other than the removal of the openat_needs_fchdir call, this change affects only systems that have neither *at function support nor the /proc/self/fd support required to emulate those *at functions. * lib/euidaccess-stat.h: Remove file. * lib/euidaccess-stat.c: Likewise. * m4/euidaccess-stat.m4: Likewise. * m4/prereq.m4 (gl_PREREQ): Don't require gl_EUIDACCESS_STAT. Prompted by a report from Bruno Haible that the rm/deep-2 test was failing on HP-UX 11.31. See --- lib/euidaccess-stat.c | 134 ------------------------------------------------- lib/euidaccess-stat.h | 5 -- m4/euidaccess-stat.m4 | 11 ---- m4/prereq.m4 | 3 +- src/remove.c | 35 ++----------- 5 files changed, 5 insertions(+), 183 deletions(-) delete mode 100644 lib/euidaccess-stat.c delete mode 100644 lib/euidaccess-stat.h delete mode 100644 m4/euidaccess-stat.m4 diff --git a/lib/euidaccess-stat.c b/lib/euidaccess-stat.c deleted file mode 100644 index 46c04b8..0000000 --- a/lib/euidaccess-stat.c +++ /dev/null @@ -1,134 +0,0 @@ -/* euidaccess-stat -- check if effective user id can access lstat'd file - This function is probably useful only for choosing whether to issue - a prompt in an implementation of POSIX-specified rm. 
- - Copyright (C) 2005-2006, 2009"); - return 0; -} -#endif diff --git a/lib/euidaccess-stat.h b/lib/euidaccess-stat.h deleted file mode 100644 index de24961..0000000 --- a/lib/euidaccess-stat.h +++ /dev/null @@ -1,5 +0,0 @@ -#include <sys/types.h> -#include <sys/stat.h> -#include <stdbool.h> - -bool euidaccess_stat (struct stat const *st, int mode); diff --git a/m4/euidaccess-stat.m4 b/m4/euidaccess-stat.m4 deleted file mode 100644 index 1f97172..0000000 --- a/m4/euidaccess-stat.m4 +++ /dev/null @@ -1,11 +0,0 @@ -# serial 1 -dnl Copyright (C) 2005, 2009-2011 Free Software Foundation, Inc. -dnl This file is free software; the Free Software Foundation -dnl gives unlimited permission to copy and/or distribute it, -dnl with or without modifications, as long as this notice is preserved. - -AC_DEFUN([gl_EUIDACCESS_STAT], -[ - AC_LIBSOURCES([euidaccess-stat.c, euidaccess-stat.h]) - AC_LIBOBJ([euidaccess-stat]) -]) diff --git a/m4/prereq.m4 b/m4/prereq.m4 index c3feecb..574ac82 100644 --- a/m4/prereq.m4 +++ b/m4/prereq.m4 @@ -1,4 +1,4 @@ -#serial 77 +#serial 78 dnl We use gl_ for non Autoconf macros. m4_pattern_forbid([^gl_[ABCDEFGHIJKLMNOPQRSTUVXYZ]])dnl @@ -36,7 +36,6 @@ AC_DEFUN([gl_PREREQ], # Invoke macros of modules that may migrate into gnulib. # There's no need to list gnulib modules here, since gnulib-tool # handles that; see ../bootstrap.conf. - AC_REQUIRE([gl_EUIDACCESS_STAT]) AC_REQUIRE([gl_FD_REOPEN]) AC_REQUIRE([gl_FUNC_XATTR]) AC_REQUIRE([gl_FUNC_XFTS]) diff --git a/src/remove.c b/src/remove.c index 3814232..c650543 100644 --- a/src/remove.c +++ b/src/remove.c @@ -23,7 +23,6 @@ #include "system.h" #include "error.h" -#include "euidaccess-stat.h" #include "file-type.h" #include "ignore-value.h" #include "quote.h" @@ -106,13 +105,10 @@ cache_stat_ok (struct stat *st) /* Return 1 if FILE is an unwritable non-symlink, 0 if it is writable or some other type of file, -1 and set errno if there is some problem in determining the answer. 
-   Use FULL_NAME only if necessary.
-   Set *BUF to the file status.
-   This is to avoid calling euidaccess when FILE is a symlink.  */
+   Set *BUF to the file status.  */
 static int
 write_protected_non_symlink (int fd_cwd,
                              char const *file,
-                             char const *full_name,
                              struct stat *buf)
 {
   if (can_write_any_file ())
@@ -170,32 +166,10 @@
      mess up with long file names).  */
   {
-    /* This implements #1: on decent systems, either faccessat is
-       native or /proc/self/fd allows us to skip a chdir.  */
-    if (!openat_needs_fchdir ())
-      {
-        if (faccessat (fd_cwd, file, W_OK, AT_EACCESS) == 0)
-          return 0;
-
-        return errno == EACCES ? 1 : -1;
-      }
-
-    /* This implements #5: */
-    size_t file_name_len = strlen (full_name);
-
-    if (MIN (PATH_MAX, 8192) <= file_name_len)
-      return ! euidaccess_stat (buf, W_OK);
-
-    if (euidaccess (full_name, W_OK) == 0)
+    if (faccessat (fd_cwd, file, W_OK, AT_EACCESS) == 0)
       return 0;
-    if (errno == EACCES)
-      {
-        errno = 0;
-        return 1;
-      }
-
-    /* Perhaps some other process has removed the file, or perhaps this
-       is a buggy NFS client.  */
-    return -1;
+    return errno == EACCES ? 1 : -1;
   }
 }
@@ -244,8 +218,7 @@ prompt (FTS const *fts, FTSENT const *ent, bool is_dir,
       && ((x->interactive == RMI_ALWAYS) || x->stdin_tty)
       && dirent_type != DT_LNK)
     {
-      write_protected = write_protected_non_symlink (fd_cwd, filename,
-                                                     full_name, sbuf);
+      write_protected = write_protected_non_symlink (fd_cwd, filename, sbuf);
       wp_errno = errno;
     }
--
1.7.7.rc0.362.g5a14
http://lists.gnu.org/archive/html/coreutils/2011-10/msg00036.html
I have a Scripted custom field called FixVersion_Changed. The value of the field is updated as it captures the number of changes made to the Fix Versions field. I would like to decrement the value of this field under certain conditions using a scripted post function. I am not a programmer and am looking for code that I can use to modify the value of this field.

You need to add the post-function to a workflow step. You could add code something like this to decrement the value:

import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.ModifiedValue
import com.atlassian.jira.issue.util.DefaultIssueChangeHolder

// Replace 12345 by the ID of your customfield
def customField = ComponentAccessor.customFieldManager.getCustomFieldObject(12345)
def fieldValue = issue.getCustomFieldValue(customField)
def newValue

// Add your conditions here
if (true){
    newValue = fieldValue -1
}

def holder = new DefaultIssueChangeHolder()
customField.updateValue(null, issue, new ModifiedValue(fieldValue, newValue), holder)

You can modify the code and adjust it to your conditions to apply the decrement.

Getting the below error... Also if I specify the field ID, I get a few more errors. Again, I am not a Java or Groovy programmer, so I can't decipher them.

Use spaces in your code. "fieldvalue-1" is a single object that you have not defined, not a calculation. Try "fieldvalue - 1".

Thank you Nic and Tayyab. Alexey's solution below worked for me. I do have a use case for the code mentioned above as well. Will test it soon.

If you use a ScriptRunner scripted field, then you will not be able to set a value for the field in a post-function. Scripted fields work differently: the value for a scripted field is calculated when the field is shown on the screen. That is why you have to change the code of the field itself.
I guess there is an increment somewhere in the code of the scripted field. You need to add a condition which will skip the increment in your case. If you produce the current code of the scripted field, I can give you more details.

Hi Alexey, that makes sense. The scripted field name is FixVersion_Changed. Below is the code of the scripted field:

import com.atlassian.jira.component.ComponentAccessor
ComponentAccessor.changeHistoryManager?.getChangeItemsForField(issue, "Fix Version").size() as int

I would like to not increment the value of this field if I transition the workflow to Backlog or Deferred. This is because I clear the Fix Version field when I transition to Backlog or Deferred as a post-function. The issue could come back in the funnel at a later date and a new Fix Version could be assigned.

You need to replace

ComponentAccessor.changeHistoryManager?.getChangeItemsForField(issue, "Fix Version").size() as int

with something like this:

ComponentAccessor.changeHistoryManager?.getChangeItemsForField(issue, "Fix Version").findAll{ !it.getToString().isEmpty() }.size() as int

Unfortunately I cannot check the code because I have no access to Jira right now. But the idea is that you have to count all changes of the Fix Version field except those where it was set to null. I am not sure what getToString returns if the value was set to null, but maybe it will work.

Hello, use this code. It worked for me:

import com.atlassian.jira.component.ComponentAccessor
ComponentAccessor.changeHistoryManager?.getChangeItemsForField(issue, "Fix Version").findAll{ it.getToString() != null }.size()

Hello, what plugin do you use for the scripted field? ScriptRunner? If you are not sure, then kindly produce the current code of the mentioned scripted field.
https://community.atlassian.com/t5/Jira-questions/Decrement-Custom-Field-Value/qaq-p/662273
phx_gen_auth and OAuth

José Valim posted this message on Twitter. So here is my experience integrating phx_gen_auth with OAuth for a Phoenix LiveView app.

Some Background

Started with Google, and planning to extend to other providers later on. But I wanted to keep the existing DB-backed sessions and many other goodies generated by phx_gen_auth. So I used ueberauth on top.

The Demo Example

For my game, I ended up stripping a lot of code that phx_gen_auth initially generated. So for the current article, I created a new Phoenix app called (very original) Demo. For the demo app, I tried to stick as much as possible to the defaults, for both phx_gen_auth and ueberauth. Both packages have very good tutorials, so my implementation follows along with those.

I will not go into details about phx_gen_auth. You can find all you need on the GitHub page and hex.pm. I just want to say that with a single mix command you will get lots of tools and a 100+ test suite out of the box. It is your choice afterward how you want to customize it.

Goal

What the Demo app is about:

- authentication for a LiveView page with OAuth on top of phx_gen_auth
- bonus: store the current_user in the LiveView state

Prerequisites

- a new (or existing) Phoenix app generated with the --live option
- a Google app you can use for OAuth. You can create a new one here:. You need to configure it to call your dev server, to make the demo work locally

Implementation

Deps and Config

Install the dependencies: phx_gen_auth, ueberauth, ueberauth_google.

# mix.exs
{:phx_gen_auth, "~> 0.6", only: [:dev], runtime: false},
{:ueberauth, "~> 0.6"},
{:ueberauth_google, "~> 0.10"}

# config.exs
config :ueberauth, Ueberauth,
  providers: [
    google: {Ueberauth.Strategy.Google, []}
  ]

Next, we need to store the Google app credentials. For the dev environment, we can create a dev.secret.exs and import it in dev.exs. Do not forget to add the secret config to the .gitignore!
# dev.secret.exs
config :ueberauth, Ueberauth.Strategy.Google.OAuth,
  client_id: "MY CLIENT ID",
  client_secret: "MY CLIENT SECRET"

Generate Auth Files

mix phx.gen.auth Accounts User users

That's it! Everything is set up and working. At this point, you can run the tests and see them passing.

Make the Phoenix app home page accessible only to authenticated users:

# router.ex
scope "/", DemoWeb do
  pipe_through [:browser, :require_authenticated_user]

  live "/", PageLive, :index
end

Fetch or create user

For the OAuth case, there is no previous registration step. The user may, or may not, exist in the database at the time of sign-in.

# accounts.ex
def fetch_or_create_user(attrs) do
  case get_user_by_email(attrs.email) do
    %User{} = user ->
      {:ok, user}

    _ ->
      %User{}
      |> User.registration_changeset(attrs)
      |> Repo.insert()
  end
end

OAuth Controller and routes

You will find references about the auth controller and default routes in the ueberauth docs. The callback route is called once the user authenticates with Google.

defmodule DemoWeb.UserOauthController do
  use DemoWeb, :controller

  alias Demo.Accounts
  alias DemoWeb.UserAuth

  plug Ueberauth

  @rand_pass_length 32

  def callback(%{assigns: %{ueberauth_auth: %{info: user_info}}} = conn, %{"provider" => "google"}) do
    user_params = %{email: user_info.email, password: random_password()}

    case Accounts.fetch_or_create_user(user_params) do
      {:ok, user} ->
        UserAuth.log_in_user(conn, user)

      _ ->
        conn
        |> put_flash(:error, "Authentication failed")
        |> redirect(to: "/")
    end
  end

  def callback(conn, _params) do
    conn
    |> put_flash(:error, "Authentication failed")
    |> redirect(to: "/")
  end

  defp random_password do
    :crypto.strong_rand_bytes(@rand_pass_length) |> Base.encode64()
  end
end

The only thing I want to mention here is the autogenerated password. It may feel like a workaround. But as I said at the beginning of the post, I would like to stick as much as possible to the default implementation. Basically, a password is mandatory to create a user.
If you don't need a password at all, you can remove the field and the logic around it. I will come back to this subject in the "Caveats" section below.

# router.ex
pipe_through [:browser, :redirect_if_user_is_authenticated]

get "/auth/:provider", UserOauthController, :request
get "/auth/:provider/callback", UserOauthController, :callback

Log-in Template

Add a login link like:

# user_session/new.html.eex
<%= link "LOGIN WITH GOOGLE", to: Routes.user_oauth_path(@conn, :request, "google") %>

Bonus: Current User in the LiveView

The UserAuth.log_in_user/3 called during authentication puts the user_token in the session. We can retrieve the session info in the LiveView mount/3 callback, and use the user_token to find the user.

def mount(_params, %{"user_token" => user_token}, socket) do
  user = Accounts.get_user_by_session_token(user_token)
  {:ok, assign(socket, current_user: user)}
end

Now we have the user in the LiveView state.

Caveats

- this basic example allows the user to log in with both a password and Google OAuth. The order is not even important. The user may register, then use Google to authenticate. Or the other way: they may sign in with Google, then request a password reset and use the email and the new password to log in. Some websites do not allow this flow. They let the user sign in only with the initial authentication method.
- this basic example can of course be further customized. You can persist the authentication method and the provider. You can then restrict the authentication flow based on that info.
- you can remove the logic you do not use, e.g. the password field and all the templates and controllers related to password reset.
They are not needed if you allow the user to authenticate only with OAuth.

Resources

- the demo GitHub repo:
- the commit that implements OAuth on top of phx_gen_auth:

Conclusions

We can create a simple OAuth flow just by following the guides of phx_gen_auth and ueberauth, plus a few lines of code.

phx_gen_auth provides the infrastructure and lots of helpers to customize your own authentication solution. Even if you extend it with OAuth, you can reuse all the functions related to user and session management.

As always, any comments and feedback are more than welcome.
https://iacobson.medium.com/phx-gen-auth-and-oauth-for-a-phoenix-liveview-app-a19a27e6befa?readmore=1&source=user_profile---------1----------------------------
Just to say that I've downloaded the latest version of thinkfinger for testing it on my Dell Latitude D420, and it works. Greets to all the people that have written this driver. I'll modify some apps to make them work with fingers and similar human extensions ;)

About the source: I had to modify one file to build the lib on Gentoo GNU/Linux. This is the patch:

diff -ru thinkfinger-0.2/pam/pam_thinkfinger.c thinkfinger-0.2.pancake/pam/pam_thinkfinger.c
--- thinkfinger-0.2/pam/pam_thinkfinger.c	2007-01-12 11:35:49.000000000 +0000
+++ thinkfinger-0.2.pancake/pam/pam_thinkfinger.c	2007-01-24 20:51:08.000000000 +0000
@@ -30,7 +30,7 @@
 #include <unistd.h>
 #include <pthread.h>
 #include <libthinkfinger.h>
-#include <security/pam_ext.h>
+//#include <security/pam_ext.h>
 #include <security/pam_modules.h>
 #define PAM_SM_AUTH

That is it: pam_ext.h is not checked by configure and is not used by the pam_thinkfinger program, so it could be removed.

--pancake
http://article.gmane.org/gmane.linux.drivers.thinkfinger/40
forkOn is O(n^2)

Summary

As I queue up additional forkOn tasks, the performance seems to become O(n^2) in the number of times I call forkOn.

Steps to reproduce

GHC 8.10.1 on Windows. Given the program:

import System.Time.Extra
import Control.Concurrent
import Control.Monad
import Data.IORef
import System.Environment

main :: IO ()
main = do
    [x] <- getArgs
    let n = read x :: Int
    bar <- newEmptyMVar
    count <- newIORef (0 :: Int)
    (d, _) <- duration $ do
        replicateM_ n $ do
            forkOn 0 $ do
                v <- atomicModifyIORef' count $ \old -> (old + 1, old + 1)
                when (v == n) $ putMVar bar ()
        takeMVar bar
    putStrLn $ showDuration d

This program uses the extra library for the duration/showDuration functions, but those can be replaced by any timing functions you wish, or by timing the binary itself.

Compile that program with ghc --make -O2 Main -threaded and run with main 10000 +RTS -N4. Vary the value 10000 from 10K to 200K. Observe that the time taken by the program increases quadratically (although with additional variation at the higher numbers). Given runs in 10K steps over that interval, averaging three runs, my times and a graph were:

Expected behavior

I'd expect it to be O(n) or O(n log n).

Environment

- GHC version used: 8.10.1

Optional:

- Operating System: Windows
- System Architecture: 64bit
https://gitlab.haskell.org/ghc/ghc/-/issues/18221
In the first two chapters of Book VI, you find out how to use two basic Swing user-interface components (labels and buttons) and how to handle events generated when the user clicks one of those buttons. If all you ever want to write are programs that display text when the user clicks a button, you can put the book down now. But if you want to write programs that actually do something worthwhile, you need to use other Swing components.

In this chapter, you find out how to use components that get information from the user. First, I cover two components that get text input from the user: text fields, which get a line of text, and text areas, which get multiple lines. Then I move on to two components that get either/or information from the user: radio buttons and check boxes. Along the way, I tell you about some features that let you decorate these controls to make them more functional. Specifically, I look at scroll bars (which are commonly used with text areas) and borders (which are used with radio buttons and check boxes).

A text field is a box that the user can type text in. You create text fields by using the JTextField class. Table 3-1 shows some of the more interesting and useful constructors and methods of this class.

Table 3-1: Handy JTextField Constructors and Methods

The width of a text field is specified in columns, a measurement that's roughly equal to the width of one character in the font that the text field uses. You have to experiment a bit to get the text fields the right size.

The usual way to work with text fields is to create them in the frame constructor, and then retrieve text entered by the user in the actionPerformed method of an action listener attached to one of the frame's buttons, using code like this:

String lastName = textLastName.getText();

Here the value entered by the user into the textLastName text field is assigned to the String variable lastName.
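Because getText and setText just read and write the component's document, the round trip can be sketched without ever showing a frame. (This small class and the sample value are my own illustration, not from the book.)

```java
import javax.swing.JTextField;

public class TextFieldDemo
{
    public static String demo()
    {
        // 15 columns, the same size used in the chapter's examples
        JTextField textLastName = new JTextField(15);
        textLastName.setText("Lowe");   // simulate the user typing a name
        return textLastName.getText();  // what an action listener would read
    }

    public static void main(String[] args)
    {
        System.out.println(demo());
    }
}
```

Note that no JFrame is needed: a Swing component holds its text whether or not it is displayed.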
Figure 3-1 shows the operation of a simple program that uses a text field to ask for the user's name. If the user enters a name, the program uses JOptionPane to say good morning to the user by displaying the middle message box shown in Figure 3-1. But if the user clicks the button without entering anything, the program displays the second JOptionPane message shown at the bottom.

Figure 3-1: The Namer application in action.

The code for this program is shown in Listing 3-1.

Listing 3-1: Saying Good Morning with a Text Field

import javax.swing.*;
import java.awt.event.*;

public class Namer extends JFrame                          → 4
{
    public static void main(String[] args)
    {
        new Namer();
    }

    private JButton buttonOK;
    private JTextField textName;                           → 12

    public Namer()
    {
        this.setSize(325, 100);
        this.setTitle("Who Are You?");
        this.setDefaultCloseOperation(
            JFrame.EXIT_ON_CLOSE);

        ButtonListener bl = new ButtonListener();

        JPanel panel1 = new JPanel();

        panel1.add(new JLabel("Enter your name: "));       → 25

        textName = new JTextField(15);                     → 27
        panel1.add(textName);

        buttonOK = new JButton("OK");
        buttonOK.addActionListener(bl);
        panel1.add(buttonOK);

        this.add(panel1);
        this.setVisible(true);
    }

    private class ButtonListener implements ActionListener
    {
        public void actionPerformed(ActionEvent e)
        {
            if (e.getSource() == buttonOK)
            {
                String name = textName.getText();          → 46
                if (name.length() == 0)                    → 47
                {
                    JOptionPane.showMessageDialog(
                        Namer.this,
                        "You didn't enter your name.",
                        "Moron",
                        JOptionPane.INFORMATION_MESSAGE);
                }
                else
                {
                    JOptionPane.showMessageDialog(
                        Namer.this,
                        "Good morning " + name,
                        "Salutations",
                        JOptionPane.INFORMATION_MESSAGE);
                }
                textName.requestFocus();                   → 63
            }
        }
    }
}

This program isn't very complicated, so the following paragraphs just hit the highlights:
For example, to convert the value entered into a text box to an int, you use the parseInt method: int count = Integer.parseInt(textCount.getText()); Here the result of the getText method is used as the parameter to the parseInt method. Table 3-2 lists the parse methods for the various wrapper classes. Note that each of these methods throws NumberFormatException if the string can't be converted. As a result, you need to call the parseInt method in a try/catch block to catch this exception. Then you can call this method whenever you need to check to see if a text field has a valid integer. For example, here's the actionPerformed method for a program that gets the value entered in a textCount text field and displays it in a JOptionPane message box if the value entered is a valid integer: public void actionPerformed(ActionEvent e) { if (e.getSource() == buttonOK) { if (isInt(textCount, "You must enter an integer.")) { JOptionPane.showMessageDialog(Number.this, "You entered " + Integer.parseInt(textCount.getText()), "Your Number", JOptionPane.INFORMATION_MESSAGE); } textCount.requestFocus(); } } Here the isInt method is called to make sure the text entered by the user can be converted to an int. If so, the text is converted to an int and displayed in a message box. (In this example, the name of the outer class is Number, which is why the first parameter of the showMessageDialog method specifies Number.this.) Of course, a method can return only one value. The only way to coax a method into returning two values is to return an object that contains both of the values. And to do that, you have to create a class that defines the object. 
Here's an example of a class you could use as the return value of a method that validates integers:

public class IntValidationResult
{
    public boolean isValid;
    public int value;
}

And here's a class that provides a static method named isInt that validates integer data and returns an IntValidationResult object:

public class Validation
{
    public static IntValidationResult isInt(
        JTextField f, String msg)
    {
        IntValidationResult result =
            new IntValidationResult();
        try
        {
            result.value = Integer.parseInt(f.getText());
            result.isValid = true;
            return result;
        }
        catch (NumberFormatException e)
        {
            JOptionPane.showMessageDialog(f,
                "Entry Error", msg,
                JOptionPane.ERROR_MESSAGE);
            f.requestFocus();
            result.isValid = false;
            result.value = 0;
            return result;
        }
    }
}

Here's an actionPerformed method that uses the isInt method of this class to validate the textCount field:

public void actionPerformed(ActionEvent e)
{
    if (e.getSource() == buttonOK)
    {
        IntValidationResult ir;
        ir = Validation.isInt(textCount,
            "You must enter an integer.");
        if (ir.isValid)
        {
            JOptionPane.showMessageDialog(Number2.this,
                "You entered " + ir.value,
                "Your Number",
                JOptionPane.INFORMATION_MESSAGE);
        }
        textCount.requestFocus();
    }
}

A text area is similar to a text field, but lets the user enter more than one line of text. If the user enters more text into the text area than can be displayed at once, the text area can use a scroll bar to let the user scroll to see the entire text. Figure 3-2 shows a text area in action.

Figure 3-2: A frame that uses a text area.

To create a text area like the one shown in Figure 3-2, you must actually use two classes. First, you use the JTextArea class to create the text area. But unfortunately, text areas by themselves don't have scroll bars. So you have to add the text area to a second component called a scroll pane, created by the JScrollPane class. Then you add the scroll pane (not the text area) to a panel so it can be displayed.

Creating a text area isn't as hard as it sounds. Here's the code I used to create the text area shown in Figure 3-2, which I then added to a panel:

textNovel = new JTextArea(10, 20);
JScrollPane scroll = new JScrollPane(textNovel,
    JScrollPane.VERTICAL_SCROLLBAR_ALWAYS,
    JScrollPane.HORIZONTAL_SCROLLBAR_NEVER);
panel1.add(scroll);

Here the first statement creates a text area, giving it an initial size of 10 rows and 20 columns. Then the second statement creates a scroll pane. Notice that the text area object is passed as a parameter to the constructor for the JScrollPane, along with constants that indicate whether the scroll pane should include vertical or horizontal scroll bars (or both). Finally, the third statement adds the scroll pane to the panel named panel1.

The following sections describe the constructors and methods of the JTextArea and JScrollPane classes in more detail.
Here's the code I used to create the text area shown in Figure 3-2, which I then added to a panel: textNovel = new JTextArea(10, 20); JScrollPane scroll = new JScrollPane(textNovel, JScrollPane.VERTICAL_SCROLLBAR_ALWAYS, JScrollPane.HORIZONTAL_SCROLLBAR_NEVER); panel1.add(scroll); Here the first statement creates a text area, giving it an initial size of 10 rows and 20 columns. Then the second statement creates a scroll pane. Notice that the text area object is passed as a parameter to the constructor for the JScrollPane, along with constants that indicate whether the scroll pane should include vertical or horizontal scroll bars (or both). Finally, the third statement adds the scroll pane to the panel named panel1. The following sections describe the constructors and methods of the JTextArea and JScrollPane classes in more detail. Table 3-3 lists the most popular constructors and methods of the JTextArea class, which you use to create text areas. In most cases, you use the second constructor, which lets you set the number of rows and columns to display. The rows parameter governs the height of the text area, while the cols parameter sets the width. Table 3-3: Clever JTextArea Constructors and Methods Open table as spreadsheet Open table as spreadsheet To retrieve the text that the user enters into a text area, you use the getText method. For example, here's an actionPerformed method from an action listener that retrieves text from a text area: public void actionPerformed(ActionEvent e) { if (e.getSource() == buttonOK) { String text = textNovel.getText(); if (text.contains("All work and no play")) JOptionPane.showMessageDialog(textNovel, "Can't you see I'm working?", "Going Crazy", JOptionPane.ERROR_MESSAGE); } } Here a message box is displayed if the text contains the string All work and no play. 
Notice that in addition to the getText method, the JTextArea class has methods that let you add text to the end of the text area's current value (append), insert text in the middle of the value (insert), and replace text (replace). You use these methods to edit the value of the text area. Text areas aren't very useful without scroll bars. To create a text area with a scroll bar, you use the JScrollPane class, whose constructors and fields are listed in Table 3-4. Note that this table doesn't show any methods for the JScrollPane class. The JScrollPane class does have methods (plenty of them, in fact). But none of them are particularly useful for ordinary programming, so I didn't include any of them in the table. Table 3-4: Essential JScrollPane Constructors and Fields Open table as spreadsheet Open table as spreadsheet The usual way to create a scroll pane is to use the second constructor. You use the first parameter of this constructor to specify the component you want to add scroll bars to. For example, to add scroll bars to a textNovel text area, you specify textNovel as the first parameter. The second parameter tells the scroll pane whether or not to create a vertical scroll bar. The value you specify for this parameter should be one of the first three fields listed in Table 3-4: The third parameter uses the three HORIZONTAL_SCROLLBAR constants to indicate whether the scroll pane includes a horizontal scroll bar always, never, or only when necessary. Thus the following code adds scroll bars to a text area. The vertical scroll bar is always shown, but the horizontal scroll bar is shown only when needed: JScrollPane scroll = new JScrollPane(textNovel, JScrollPane.VERTICAL_SCROLLBAR_ALWAYS, JScrollPane.HORIZONTAL_SCROLLBAR_AS_NEEDED); A check box is a control that the user can click to either check or uncheck. Check boxes are usually used to let the user specify Yes or No to an option. Figure 3-3 shows a frame with three check boxes. 
Figure 3-3: A frame with three check boxes.

To create a check box, you use the JCheckBox class. Its favorite constructors and methods are shown in Table 3-5.

Table 3-5: Notable JCheckBox Constructors and Methods

As with any Swing component, if you want to refer to the component in both the frame class constructor and a listener, you have to declare class variables to refer to the check box components, like this:

JCheckBox pepperoni, mushrooms, anchovies;

Then you can use statements like these in the frame constructor to create the check boxes and add them to a panel (in this case, panel1):

pepperoni = new JCheckBox("Pepperoni");
panel1.add(pepperoni);
mushrooms = new JCheckBox("Mushrooms");
panel1.add(mushrooms);
anchovies = new JCheckBox("Anchovies");
panel1.add(anchovies);

Notice that I didn't specify the initial state of these check boxes in the constructor. As a result, they're initially unchecked. If you want to create a check box that is initially checked, call the constructor like this:

pepperoni = new JCheckBox("Pepperoni", true);

In an event listener, you can test the state of a check box by using the isSelected method, and you can set the state of a check box by calling its setSelected method.
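Both methods work the same whether the check box is on screen or not, so you can try them out headlessly. A quick sketch (the topping names follow the chapter's example; the helper class is mine):

```java
import javax.swing.JCheckBox;

public class CheckBoxState
{
    public static boolean[] demo()
    {
        JCheckBox pepperoni = new JCheckBox("Pepperoni");        // unchecked by default
        JCheckBox mushrooms = new JCheckBox("Mushrooms", true);  // checked initially

        pepperoni.setSelected(true);   // check it programmatically
        mushrooms.setSelected(false);  // uncheck it

        return new boolean[] { pepperoni.isSelected(),
                               mushrooms.isSelected() };
    }

    public static void main(String[] args)
    {
        boolean[] state = demo();
        System.out.println(state[0] + " " + state[1]);
    }
}
```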
For example, here's an actionPerformed method from an action listener that displays a message box and unchecks all three check boxes when the user clicks the OK button:

public void actionPerformed(ActionEvent e)
{
    if (e.getSource() == buttonOK)
    {
        String msg = "";
        if (pepperoni.isSelected())
            msg += "Pepperoni ";
        if (mushrooms.isSelected())
            msg += "Mushrooms ";
        if (anchovies.isSelected())
            msg += "Anchovies ";
        if (msg.equals(""))
            msg = "You didn't order any toppings.";
        else
            msg = "You ordered these toppings: " + msg;
        JOptionPane.showMessageDialog(buttonOK, msg,
            "Your Order",
            JOptionPane.INFORMATION_MESSAGE);
        pepperoni.setSelected(false);
        mushrooms.setSelected(false);
        anchovies.setSelected(false);
    }
}

Here the name of each topping selected by the user is added to a text string. For example, if you select Pepperoni and Anchovies, the following message is displayed:

You ordered these toppings: Pepperoni Anchovies

Suppose your restaurant has anchovies on the menu, but you refuse to actually make pizzas with anchovies on them. Here's an actionPerformed method from an action listener that displays a message if the user tries to check the Anchovies check box, and unchecks the box:

public void actionPerformed(ActionEvent e)
{
    if (e.getSource() == anchovies)
    {
        JOptionPane.showMessageDialog(anchovies,
            "We don't do anchovies here.",
            "Yuck!",
            JOptionPane.WARNING_MESSAGE);
        anchovies.setSelected(false);
    }
}

Radio buttons are similar to check boxes, but with a crucial difference: Radio buttons travel in groups, and a user can select only one radio button in each group at a time. When you click a radio button to select it, whatever radio button was previously selected is automatically deselected. Figure 3-4 shows a frame with three radio buttons.

To work with radio buttons, you use two classes. First, you create the radio buttons themselves with the JRadioButton class, whose constructors and methods are shown in Table 3-6. Then you create a group for the buttons with the ButtonGroup class.
You must add the radio buttons themselves to a panel (so they are displayed) and to a button group (so they're properly grouped with other buttons).

Figure 3-4: A frame with three radio buttons.

Table 3-6: Various JRadioButton Constructors and Methods

The usual way to create a radio button is to declare a variable to refer to the button as a class variable so it can be accessed anywhere in the class. For example:

JRadioButton small, medium, large;

Then, in the frame constructor, you call the JRadioButton constructor to create the radio button:

small = new JRadioButton("Small");

You can then add the radio button to a panel in the usual way. To create a button group to group radio buttons that work together, just call the ButtonGroup class constructor:

ButtonGroup group1 = new ButtonGroup();

Then call the add method of the ButtonGroup to add each radio button to the group:

group1.add(small);
group1.add(medium);
group1.add(large);
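The mutual exclusion that a ButtonGroup enforces doesn't depend on any window being on screen: selecting one button in the group automatically deselects the previously selected one, even when you call setSelected from code. A small sketch using the same names as above (the wrapper class is my own):

```java
import javax.swing.ButtonGroup;
import javax.swing.JRadioButton;

public class RadioGroupDemo
{
    public static boolean[] demo()
    {
        JRadioButton small = new JRadioButton("Small");
        JRadioButton medium = new JRadioButton("Medium");
        JRadioButton large = new JRadioButton("Large");

        ButtonGroup group1 = new ButtonGroup();
        group1.add(small);
        group1.add(medium);
        group1.add(large);

        small.setSelected(true);   // select Small
        large.setSelected(true);   // selecting Large deselects Small

        return new boolean[] { small.isSelected(),
                               medium.isSelected(),
                               large.isSelected() };
    }

    public static void main(String[] args)
    {
        boolean[] s = demo();
        System.out.println(s[0] + " " + s[1] + " " + s[2]);
    }
}
```

After the second setSelected call, only Large is selected; the group took care of deselecting Small.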
To create a border, you call one of the static methods listed in Table 3-7. Each of these methods creates a border with a slightly different visual style. You then apply the Border object to a panel by calling the panel's setBorder method. The BorderFactory class is in the javax.swing package, but the Border interface that defines the resulting border objects is in javax.swing.border. Thus you need to include this import statement at the beginning of the class-in addition to importing javax.swing.*-if you plan on using borders: import javax.swing.border.*; For example, here's a snippet of code that creates a panel, creates a titled border, and applies the border to the panel: JPanel sizePanel = new JPanel(); Border b1 = BorderFactory.createTitledBorder("Size"); sizePanel.setBorder(b1); Then any components you add to sizePanel appear within this border. The last method listed in Table 3-7 needs a little explanation. It simply adds a title to a border created by any of the other created methods of the BorderFactory class. For example, you can create a raised bevel border with the title Options like this: Border b = BorderFactory.createRaisedBevelBorder(); b = BorderFactory.createTitledBorder(b, "Options"); To give you an idea of how borders work together with radio buttons and check boxes, Listing 3-2 presents the complete code for the program that created the frame that was shown in Figure 3-5. When the user clicks the OK button, this program displays a message box summarizing the user's order. For example, if the user orders a medium pizza with pepperoni and mushrooms, the following message is displayed: You ordered a medium pizza with the following toppings: Pepperoni Mushrooms If you order a pizza with no toppings, the message you get looks something like this: You ordered a medium pizza with no toppings. 
Listing 3-2: The Pizza Order Program

import javax.swing.*;
import java.awt.event.*;
import javax.swing.border.*;

public class Pizza extends JFrame
{
    public static void main(String [] args)
    {
        new Pizza();
    }

    private JButton buttonOK;                           → 12
    private JRadioButton small, medium, large;
    private JCheckBox pepperoni, mushrooms, anchovies;

    public Pizza()
    {
        this.setSize(320,200);
        this.setTitle("Order Your Pizza");
        this.setDefaultCloseOperation(
            JFrame.EXIT_ON_CLOSE);
        ButtonListener bl = new ButtonListener();

        JPanel mainPanel = new JPanel();                → 24

        JPanel sizePanel = new JPanel();                → 26
        Border b1 =                                     → 27
            BorderFactory.createTitledBorder("Size");
        sizePanel.setBorder(b1);                        → 29

        ButtonGroup sizeGroup = new ButtonGroup();      → 31

        small = new JRadioButton("Small");              → 33
        small.setSelected(true);
        sizePanel.add(small);
        sizeGroup.add(small);

        medium = new JRadioButton("Medium");            → 38
        sizePanel.add(medium);
        sizeGroup.add(medium);

        large = new JRadioButton("Large");              → 42
        sizePanel.add(large);
        sizeGroup.add(large);

        mainPanel.add(sizePanel);                       → 46

        JPanel topPanel = new JPanel();                 → 48
        Border b2 = BorderFactory.createTitledBorder(
            "Toppings");
        topPanel.setBorder(b2);

        pepperoni = new JCheckBox("Pepperoni");         → 53
        topPanel.add(pepperoni);
        mushrooms = new JCheckBox("Mushrooms");
        topPanel.add(mushrooms);
        anchovies = new JCheckBox("Anchovies");
        topPanel.add(anchovies);

        mainPanel.add(topPanel);                        → 62

        buttonOK = new JButton("OK");                   → 64
        buttonOK.addActionListener(bl);
        mainPanel.add(buttonOK);

        this.add(mainPanel);                            → 68
        this.setVisible(true);
    }

    private class ButtonListener implements ActionListener
    {
        public void actionPerformed(ActionEvent e)
        {
            if (e.getSource() == buttonOK)
            {
                String tops = "";                       → 79
                if (pepperoni.isSelected())
                    tops += "Pepperoni ";
                if (mushrooms.isSelected())
                    tops += "Mushrooms ";
                if (anchovies.isSelected())
                    tops += "Anchovies ";

                String msg = "You ordered a ";          → 87
                if (small.isSelected())
                    msg += "small pizza with ";
                if (medium.isSelected())
                    msg += "medium pizza with ";
                if (large.isSelected())
                    msg += "large pizza with ";

                if (tops.equals(""))                    → 95
                    msg += "no toppings.";
                else
                    msg += "the following toppings: "
                        + tops;

                JOptionPane.showMessageDialog(          → 100
                    buttonOK, msg, "Your Order",
                    JOptionPane.INFORMATION_MESSAGE);

                pepperoni.setSelected(false);           → 104
                mushrooms.setSelected(false);
                anchovies.setSelected(false);
                small.setSelected(true);
            }
        }
    }
}

I cover everything in this program in this chapter (or in previous chapters), so I just hit the highlights here.

As Figure 3-6 shows, a slider is a component that lets a user pick a value from a set range (say, from 0 to 50) by moving a knob. Sliders are a convenient way to get numeric input from the user when the input falls within a set range of values.

Figure 3-6: A frame with a slider.

To create a slider control, you use the JSlider class. Table 3-8 shows its constructors and methods.

Table 3-8: Selected JSlider Constructors and Methods

To create a bare-bones slider, you can simply call the JSlider constructor with no parameters:

slider = new JSlider();

Here's the statement that creates the slider shown in Figure 3-6, which ranges from 0 to 50:

slider = new JSlider(0, 50);

To get the value of the slider, you use the getValue method. For example, here's the actionPerformed method for the action listener attached to the OK button in Figure 3-6:

public void actionPerformed(ActionEvent e)
{
    if (e.getSource() == buttonOK)
    {
        int level = slider.getValue();
        JOptionPane.showMessageDialog(slider,
            "Remember, this is for posterity. You chose " + level + ".");
    }
}

If you want to respond the moment the user moves the knob, you can attach a change listener to the slider instead. Here's an example of a class that can be used to react to slider changes:

private class SliderListener implements ChangeListener
{
    public void stateChanged(ChangeEvent e)
    {
        if (slider.getValue() == 50)
        {
            JOptionPane.showMessageDialog(slider,
                "No! Not 50!",
                "The Machine",
                JOptionPane.WARNING_MESSAGE);
        }
    }
}

To wire an instance of this class to the slider, call the addChangeListener method:

slider.addChangeListener(new SliderListener());

Then the stateChanged method is called whenever the user moves the knob to another position.
It checks the value of the slider and displays a message box if the user has advanced the slider all the way to 50.
Linux events.
addr2line : Used to convert addresses into file names and line numbers.
addresses : Formats for internet mail addresses.
agetty : An alternative Linux getty.
alias : Create an alias for Linux commands.
alsactl : Access advanced controls for ALSA soundcard driver.
amidi : Perform read/write operation for ALSA RawMIDI ports.
amixer : Access CLI-based mixer for ALSA soundcard driver.
anacron : Used to run commands periodically.
aplay : Sound recorder and player for CLI.
aplaymidi : CLI utility used to play MIDI files.
apm : Show Advanced Power Management (APM) hardware info on older systems.
apmd : Used to handle events reported by APM BIOS drivers.
apropos : Shows the list of all man pages containing a specific keyword.
apt : Advanced Package Tool, a package management system for Debian and derivatives.
apt-get : Command-line utility to install/remove/update packages based on the APT system.
aptitude : Another utility to add/remove/upgrade packages based on the APT system.
ar : A utility to create/modify/extract from archives.
arch : Display machine hardware name.
arecord : Just like aplay, it's a sound recorder and player for ALSA soundcard driver.
arecordmidi : Record standard MIDI files.
arp : Used to make changes to the system's ARP cache.
as : A portable GNU assembler.
aspell : An interactive spell checker utility.
at : Used to schedule command execution at specified date & time, reading commands from an input file.
atd : Used to execute jobs queued by the at command.
atq : List a user's pending jobs for the at command.
atrm : Delete jobs queued by the at command.
audiosend : Used to send an audio recording as an email.
aumix : An audio mixer utility.
autoconf : Generate configuration scripts from a TEMPLATE-FILE and send the output to standard output.
autoheader : Create a template header for configure.
automake : Creates GNU standards-compliant Makefiles from template files.
autoreconf : Update generated configuration files.
autoscan : Generate a preliminary configure.in.
autoupdate : Update a configure.in file to newer autoconf.
awk : Used to find and replace text in a file(s).
Linux Commands – B
badblocks : Search a disk partition for bad sectors.
banner : Used to print characters as a poster.
basename : Used to display a file name with its directory (and optionally its suffix) stripped.
bash : GNU Bourne-Again Shell.
batch : Used to run commands entered on a standard input.
bc : Access the GNU bc calculator utility.
bg : Send processes to the background.
biff : Notify about incoming mail and sender's name on a system running comsat server.
bind : Used to attach a name to a socket.
bison : A GNU parser generator, compatible with yacc.
break : Used to exit from a loop (eg: for, while, select).
builtin : Used to run shell builtin commands, make custom functions for commands extending their functionality.
bzcmp : Used to call the cmp program for bzip2 compressed files.
bzdiff : Used to call the diff program for bzip2 compressed files.
bzgrep : Used to call grep for bzip2 compressed files.
bzip2 : A block-sorting file compressor used to shrink given files.
bzless : Used to apply 'less' (show info one page at a time) to bzip2 compressed files.
bzmore : Used to apply 'more' (an inferior version of less) to bzip2 compressed files.
Linux Commands – C
cal : Show calendar.
cardctl : Used to control PCMCIA sockets and select configuration schemes.
cardmgr : Keeps an eye on the added/removed sockets for PCMCIA devices.
case : Execute a command conditionally by matching a pattern.
cat : Used to concatenate files and print them on the screen.
cc : GNU C and C++ compiler.
cd : Used to change directory.
cdda2wav : Used to rip a CD-ROM and make a WAV file.
cdparanoia : Record audio from CD more reliably using data-verification algorithms.
cdrdao : Used to write all the content specified in a file to a CD all at once.
cdrecord : Used to record data or audio compact discs.
cfdisk : Show or change the disk partition table.
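For instance, basename's directory-and-suffix stripping works like this (the path is made up):

```shell
# basename keeps only the final path component...
basename /usr/local/share/doc.txt        # prints: doc.txt
# ...and can also strip a trailing suffix you name.
basename /usr/local/share/doc.txt .txt   # prints: doc
```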
chage : Used to change user password information.
chattr : Used to change file attributes.
chdir : Used to change the active working directory.
chfn : Used to change real user name and information.
chgrp : Used to change group ownership for a file.
chkconfig : Manage execution of runlevel services.
chmod : Change access permission for a file(s).
chown : Change the owner or group for a file.
chpasswd : Update passwords in a batch.
chroot : Run a command or interactive shell with a different root directory.
chrt : Manipulate the real-time attributes of a process.
chsh : Switch login shell.
chvt : Change foreground virtual terminal.
cksum : Perform a CRC checksum for files.
clear : Used to clear the terminal window.
cmp : Compare two files (byte by byte).
col : Filter reverse (and half-reverse) line feeds from the input.
colcrt : Filter nroff output for CRT previewing.
colrm : Remove columns from the lines of a file.
column : A utility that formats its input into columns.
comm : Used to compare two sorted files line by line.
command : Used to execute a command with arguments, ignoring any shell function named command.
compress : Used to compress one or more file(s), replacing the original ones.
continue : Resume the next iteration of a loop.
cp : Copy contents of one file to another.
cpio : Copy files from and to archives.
cpp : GNU C language processor.
cron : A daemon to execute scheduled commands.
crond : Same work as cron.
crontab : Manage crontab files (containing scheduled commands) for users.
csplit : Split a file into sections on the basis of context lines.
ctags : Make a list of functions and macro names defined in a programming source file.
cupsd : A scheduler for CUPS.
curl : Used to transfer data from or to a server using supported protocols.
cut : Used to remove sections from each line of a file(s).
cvs : Concurrent Versions System. Used to track file versions, allow storage/retrieval of previous versions, and enables multiple users to work on the same file.
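To make cut's section-removal concrete, here's a small sketch on colon-separated data (the file name and contents are made up):

```shell
# Build a tiny /etc/passwd-style file, then pull out single fields with cut.
printf 'root:x:0:0\nalice:x:1000:1000\n' > users.txt
cut -d: -f1 users.txt    # field 1 of each line: the user names
cut -d: -f3 users.txt    # field 3: the numeric user IDs
```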
Linux Commands – D
date : Show system date and time.
dc : Desk calculator utility.
dd : Used to convert and copy a file, create disk clone, write disk headers, etc.
ddrescue : Used to recover data from a crashed partition.
deallocvt : Deallocates kernel memory for unused virtual consoles.
debugfs : File system debugger for ext2/ext3/ext4.
declare : Used to declare variables and assign attributes.
depmod : Generate modules.dep and map files.
devdump : Interactively displays the contents of a device or file system ISO.
df : Show disk usage.
diff : Used to compare files line by line.
diff3 : Compare three files line by line.
dig : Domain Information Groper, a DNS lookup utility.
dir : List the contents of a directory.
dircolors : Set colors for 'ls' by altering the LS_COLORS environment variable.
dirname : Display pathname after removing the last slash and characters thereafter.
dirs : Show the list of remembered directories.
disable : Restrict access to a printer.
dlpsh : Interactive Desktop Link Protocol (DLP) shell for PalmOS.
dmesg : Examine and control the kernel ring buffer.
dnsdomainname : Show the DNS domain name of the system.
dnssec-keygen : Generate encrypted Secure DNS keys for a given domain name.
dnssec-makekeyset : Produce a domain key set from one or more DNS security keys generated by dnssec-keygen.
dnssec-signkey : Sign a secure DNS keyset with key signatures specified in the list of key-identifiers.
dnssec-signzone : Sign a secure DNS zonefile with the signatures in the specified list of key-identifiers.
doexec : Used to run an executable with an arbitrary argv list provided.
domainname : Show or set the name of the current NIS (Network Information Services) domain.
dosfsck : Check and repair MS-DOS file systems.
du : Show disk usage summary for a file(s).
dump : Backup utility for ext2/ext3 file systems.
dumpe2fs : Dump ext2/ext3/ext4 file system information.
dumpkeys : Show information about the keyboard driver's current translation tables.
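For example, diff's line-by-line comparison (and its exit status, which scripts often rely on) can be seen with two throwaway files:

```shell
# diff exits 0 when the files match and 1 when they differ.
printf 'alpha\nbeta\n'  > old.txt
printf 'alpha\ngamma\n' > new.txt
diff old.txt new.txt || echo "files differ"
```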
Linux Commands – E
e2fsck : Used to check ext2/ext3/ext4 file systems.
e2image : Store important ext2/ext3/ext4 filesystem metadata to a file.
e2label : Show or change the label on an ext2/ext3/ext4 filesystem.
echo : Send input string(s) to standard output i.e. display text on the screen.
ed : GNU Ed – a line-oriented text editor.
edquota : Used to edit filesystem quotas using a text editor, such as vi.
egrep : Search and display text matching a pattern.
eject : Eject removable media.
elvtune : Used to set latency in the elevator algorithm used to schedule I/O activities for specified block devices.
emacs : Emacs text editor command-line utility.
enable : Used to enable/disable shell builtin commands.
env : Run a command in a modified environment. Show/set/delete environment variables.
envsubst : Substitute environment variable values in shell format strings.
esd : Start the Enlightenment Sound Daemon (EsounD or esd). Enables multiple applications to access the same audio device simultaneously.
esd-config : Manage EsounD configuration.
esdcat : Use EsounD to send audio data from a specified file.
esdctl : EsounD control program.
esddsp : Used to reroute non-esd audio data to esd and control all the audio using esd.
esdmon : Used to copy the sound being sent to a device. Also, send it to a secondary device.
esdplay : Use EsounD system to play a file.
esdrec : Use EsounD to record audio to a specified file.
esdsample : Sample audio using esd.
etags : Used to create a list of functions and macros from a programming source file. These etags are used by emacs. For vi, use ctags.
ethtool : Used to query and control network driver and hardware settings.
eval : Used to evaluate multiple commands or arguments at once.
ex : An interactive line-based text editor.
exec : Execute a command, replacing the current shell process.
exit : Exit from the terminal.
expand : Convert tabs into spaces in a given file and show the output.
expect : An extension to the Tcl script, it's used to automate interaction with other applications based on their expected output.
export : Used to set an environment variable.
expr : Evaluate expressions and display them on standard output.
Linux Commands – F
factor : Display prime factors of specified integer numbers.
false : Do nothing, unsuccessfully. Exit with a status code indicating failure.
fc-cache : Make font information cache after scanning the directories.
fc-list : Show the list of available fonts.
fdformat : Do a low-level format on a floppy disk.
fdisk : Make changes to the disk partition table.
fetchmail : Fetch mail from mail servers and forward it to the local mail delivery system.
fg : Used to send a job to the foreground.
fgconsole : Display the number of the current virtual console.
fgrep : Display lines from a file(s) that match a specified string. A variant of grep.
file : Determine file type for a file.
find : Do a file search in a directory hierarchy.
finger : Display user data including the information listed in .plan and .project in each user's home directory.
fingerd : Provides a network interface for the finger program.
flex : Generate programs that perform pattern-matching on text.
fmt : Used to convert text to a specified width by filling lines and removing new lines, displaying the output.
fold : Wrap input lines to fit in a specified width.
for : Expand words and run commands for each one in the resultant list.
formail : Used to filter standard input into mailbox format.
format : Used to format disks.
free : Show free and used system memory.
fsck : Check and repair a Linux file system.
ftp : File transfer protocol user interface.
ftpd : FTP server process.
function : Used to define function macros.
fuser : Find and kill a process accessing a file.
Linux Commands – G
g++ : Run the g++ compiler.
gawk : Used for pattern scanning and language processing. A GNU implementation of AWK language.
gcc : A C and C++ compiler by GNU.
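A few of the evaluation and formatting utilities above in action (a sketch):

```shell
expr 2 + 3      # integer arithmetic on the command line; prints 5
factor 12       # prime factorization; prints: 12: 2 2 3
# fold wraps long input lines at a fixed column width.
echo "some long line of text" | fold -w 8
```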
gdb : A utility to debug programs and see where they crash.
getent : Shows entries from Name Service Switch Libraries for specified keys.
getkeycodes : Displays the kernel scancode-to-keycode mapping table.
getopts : A utility to parse positional parameters.
gpasswd : Allows an administrator to change group passwords.
gpg : Enables encryption and signing services as per the OpenPGP standard.
gpgsplit : Used to split an OpenPGP message into packets.
gpgv : Used to verify OpenPGP signatures.
gpm : It enables cut and paste functionality and a mouse server for the Linux console.
gprof : Shows call graph profile data.
grep : Searches input files for a given pattern and displays the relevant lines.
groff : Serves as the front-end of the groff document formatting system.
groffer : Displays groff files and man pages.
groupadd : Used to add a new user group.
groupdel : Used to remove a user group.
groupmod : Used to modify a group definition.
groups : Show the group(s) to which a user belongs.
grpck : Verifies the integrity of group files.
grpconv : Creates a gshadow file from a group or an already existing gshadow.
gs : Invokes Ghostscript, an interpreter and previewer for Adobe's PostScript and PDF languages.
gunzip : A utility to compress/expand files.
gzexe : Used to compress executable files in place and have them automatically uncompress and run at a later stage.
gzip : Used to compress or expand files.
Linux Commands – H
halt : Command used to halt the machine.
hash : Shows the path for the commands executed in the shell.
hdparm : Show/configure parameters for SATA/IDE devices.
head : Shows first 10 lines from each specified file.
help : Displays help for a built-in command.
hexdump : Shows specified file output in hexadecimal, octal, decimal, or ASCII format.
history : Shows the command history.
host : A utility to perform DNS lookups.
hostid : Shows host's numeric ID in hexadecimal format.
hostname : Display/set the hostname of the system.
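grep and head, two of the most-used commands in this stretch of the list, sketched on a made-up log file:

```shell
printf 'error: disk full\ninfo: started\nerror: timeout\n' > app.log
grep 'error' app.log       # print only the lines matching the pattern
grep -c 'error' app.log    # -c prints the count of matching lines instead
head -n 1 app.log          # first line only (head defaults to 10 lines)
```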
htdigest : Manage the user authentication file used by the Apache web server.
htop : An interactive process viewer for the command line.
hwclock : Show or configure the system's hardware clock.
Linux Commands – I
iconv : Convert text file from one encoding to another.
id : Show user and group information for a specified user.
if : Execute a command conditionally.
ifconfig : Used to configure network interfaces.
ifdown : Stops a network interface.
ifup : Starts a network interface.
imapd : An IMAP (Interactive Mail Access Protocol) server daemon.
import : Capture an X server screen and save it as an image.
inetd : Extended internet services daemon, it starts the programs that provide internet services.
info : Used to read the documentation in Info format.
init : Systemd system and service manager.
insmod : A program that inserts a module into the Linux kernel.
install : Used to copy files to specified locations and set attributes during the install process.
iostat : Shows statistics for CPU, I/O devices, partitions, network filesystems.
ip : Display/manipulate routing, network devices, policy routing, and tunnels.
ipcrm : Used to remove System V interprocess communication (IPC) objects and associated data structures.
ipcs : Show information on IPC facilities for which the calling process has read access.
iptables : Administration tool for IPv4 packet filtering and NAT.
iptables-restore : Used to restore IP tables from data specified in the input or a file.
iptables-save : Used to dump IP table contents to standard output.
isodump : A utility that shows the content of iso9660 images to verify the integrity of directory contents.
isoinfo : A utility to perform directory-like listings of iso9660 images.
isosize : Show the length of an iso9660 filesystem contained in a specified file.
isovfy : Verifies the integrity of an iso9660 image.
ispell : A CLI-based spell-check utility.
Linux Commands – J
jobs : Show the list of active jobs and their status.
join : For each pair of input lines, join them using a common field and display on standard output.
Linux Commands – K
kbd_mode : Set a keyboard mode. Without arguments, shows the current keyboard mode.
kbdrate : Reset keyboard repeat rate and delay time.
kill : Send a kill (termination) signal to one or more processes.
killall : Kills a process(es) running a specified command.
killall5 : A SystemV killall command. Kills all the processes excluding the ones which it depends on.
klogd : Control and prioritize the kernel messages to be displayed on the console, and log them through syslogd.
kudzu : Used to detect new and changed hardware by comparing it with an existing database. Only for RHEL and derivatives.
Linux Commands – L
last : Shows a list of recent logins on the system by fetching data from /var/log/wtmp file.
lastb : Shows the list of bad login attempts by fetching data from /var/log/btmp file.
lastlog : Displays information about the most recent login of all users or a specified user.
ld : The Unix linker, it combines archives and object files. It then puts them into one output file, resolving external references.
ldconfig : Configure dynamic linker run-time bindings.
ldd : Shows shared object dependencies.
less : Displays contents of a file one page at a time. It's more advanced than the more command.
lesskey : Used to specify key bindings for the less command.
let : Used to perform integer arithmetic on shell variables.
lftp : An FTP utility with extra features.
lftpget : Uses lftp to retrieve HTTP, FTP, and other protocol URLs supported by lftp.
link : Create links between two files. Similar to ln command.
ln : Create links between files. Links can be hard (two names for the same file) or soft (a shortcut of the first file).
loadkeys : Load keyboard translation tables.
local : Used to create function variables.
locale : Shows information about current or all locales.
locate : Used to find files by their name.
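The hard-versus-soft link distinction mentioned under ln is easy to demonstrate (file names made up):

```shell
echo "hello" > original.txt
ln original.txt hardlink.txt      # hard link: a second name for the same file
ln -s original.txt softlink.txt   # soft (symbolic) link: a pointer to the name
ls -l softlink.txt                # shows: softlink.txt -> original.txt
```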
lockfile : Create semaphore file(s) which can be used to limit access to a file.
logger : Make entries in the system log.
login : Create a new session on the system.
logname : Shows the login name of the current user.
logout : Performs the logout operation by making changes to the utmp and wtmp files.
logrotate : Used for automatic rotation, compression, removal, and mailing of system log files.
look : Shows any lines in a file containing a given string in the beginning.
losetup : Set up and control loop devices.
lpadmin : Used to configure printer and class queues provided by CUPS (Common UNIX Printing System).
lpc : Line printer control program, it provides limited control over CUPS printer and class queues.
lpinfo : Shows the list of available devices and drivers known to the CUPS server.
lpmove : Move one or more printing jobs to a new destination.
lpq : Shows current print queue status for a specified printer.
lpr : Used to submit files for printing.
lprint : Used to print a file.
lprintd : Used to abort a print job.
lprintq : List the print queue.
lprm : Cancel print jobs.
lpstat : Displays status information about current classes, jobs, and printers.
ls : Shows the list of files in the current directory.
lsattr : Shows file attributes on a Linux ext2 file system.
lsblk : Lists information about all available or the specified block devices.
lsmod : Show the status of modules in the Linux kernel.
lsof : List open files.
lspci : List all PCI devices.
lsusb : List USB devices.
Linux Commands – M
m4 : Macro processor.
mail : Utility to compose, receive, send, forward, and reply to emails.
mailq : Shows the list of all emails queued for delivery (sendmail queue).
mailstats : Shows current mail statistics.
mailto : Used to send mail with multimedia content in MIME format.
make : Utility to maintain groups of programs, recompile them if needed.
makedbm : Creates an NIS (Network Information Services) database map.
makemap : Creates database maps used by the keyed map lookups in sendmail.
man : Shows manual pages for Linux commands.
manpath : Determine search path for manual pages.
mattrib : Used to change MS-DOS file attribute flags.
mbadblocks : Checks MS-DOS filesystems for bad blocks.
mcat : Dump raw disk image.
mcd : Used to change MS-DOS directory.
mcopy : Used to copy MS-DOS files from or to Unix.
md5sum : Used to check MD5 checksum for a file.
mdel, mdeltree : Used to delete MS-DOS files. mdeltree recursively deletes an MS-DOS directory and its contents.
mdir : Used to display an MS-DOS directory.
mdu : Used to display the amount of space occupied by an MS-DOS directory.
merge : Three-way file merge. Includes all changes from file2 and file3 to file1.
mesg : Allow/disallow others to send write messages to your terminal.
metamail : For sending and showing rich text or multimedia email using MIME typing metadata.
metasend : An interface for sending non-text mail.
mformat : Used to add an MS-DOS filesystem to a low-level formatted floppy disk.
mimencode : Translate to/from MIME multimedia mail encoding formats.
minfo : Display parameters of an MS-DOS filesystem.
mkdir : Used to create directories.
mkdosfs : Used to create an MS-DOS filesystem under Linux.
mke2fs : Used to create an ext2/ext3/ext4 filesystem.
mkfifo : Used to create named pipes (FIFOs) with the given names.
mkfs : Used to build a Linux filesystem on a hard disk partition.
mkfs.ext3 : Same as mke2fs, create an ext3 Linux filesystem.
mkisofs : Used to create an ISO9660/JOLIET/HFS hybrid filesystem.
mklost+found : Create a lost+found directory on a mounted ext2 filesystem.
mkmanifest : Makes a list of file names and their DOS 8.3 equivalents.
mknod : Create a FIFO, block (buffered) special file, or character (unbuffered) special file with the specified name.
mkraid : Used to set up RAID device arrays.
mkswap : Set up a Linux swap area.
mktemp : Create a temporary file or directory.
mlabel : Make an MS-DOS volume label.
mmd : Make an MS-DOS subdirectory.
mmount : Mount an MS-DOS disk.
mmove : Move or rename an MS-DOS file or subdirectory.
mmv : Mass move and rename files.
modinfo : Show information about a Linux kernel module.
modprobe : Add or remove modules from the Linux kernel.
more : Display content of a file page-by-page.
most : Browse or page through a text file.
mount : Mount a filesystem.
mountd : NFS mount daemon.
mpartition : Partition an MS-DOS disk.
mpg123 : Command-line mp3 player.
mpg321 : Similar to mpg123.
mrd : Remove an MS-DOS subdirectory.
mren : Rename an existing MS-DOS file.
mshowfat : Show FAT clusters allocated to a file.
mt : Control magnetic tape drive operation.
mtools : Utilities to access MS-DOS disks.
mtoolstest : Tests and displays the mtools configuration files.
mtr : A network diagnostic tool.
mtype : Display contents of an MS-DOS file.
mv : Move/rename files or directories.
mzip : Change protection mode and eject disk on Zip/Jaz drive.
Linux Commands – N
named : Internet domain name server.
namei : Follow a pathname until a terminal point is found.
nameif : Name network interfaces based on MAC addresses.
nc : Netcat utility. Arbitrary TCP and UDP connections and listens.
netstat : Show network information.
newaliases : Rebuilds the mail alias database.
newgrp : Log in to a new group.
newusers : Update/create new users in batch.
nfsd : Special filesystem for controlling Linux NFS server.
nfsstat : List NFS statistics.
nice : Run a program with modified scheduling priority.
nl : Show numbered lines while displaying the contents of a file.
nm : List symbols from object files.
nohup : Run a command immune to hangups.
notify-send : A program to send desktop notifications.
nslookup : Used to perform DNS queries.
nsupdate : Dynamic DNS update utility.
Linux Commands – O
objcopy : Copy and translate object files.
objdump : Display information from object files.
od : Dump files in octal and other formats.
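A short sketch of the everyday file-management pair mkdir and mv (directory and file names made up):

```shell
mkdir -p project/src               # -p creates parent directories as needed
echo "draft" > project/notes.txt
mv project/notes.txt project/src/  # move the file into the subdirectory
ls project/src                     # prints: notes.txt
```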
op : Operator access, allows system administrators to grant users access to certain root operations that require superuser privileges.
open : Open a file using its default application.
openvt : Start a program on a new virtual terminal (VT).
Linux Commands – P
passwd : Change user password.
paste : Merge lines of files. Write to standard output TAB-separated lines consisting of sequentially corresponding lines from each file.
patch : Apply a patchfile (containing a difference listing produced by the diff program) to an original file.
pathchk : Check if file names are valid or portable.
perl : Perl 5 language interpreter.
pgrep : List process IDs matching the specified criteria among all the running processes.
pidof : Find process ID of a running program.
ping : Send ICMP ECHO_REQUEST to network hosts.
pinky : Lightweight finger.
pkill : Send kill signal to processes based on name and other attributes.
pmap : Report memory map of a process.
popd : Removes the directory on the head of the directory stack and takes you to the new directory on the head.
portmap : Converts RPC program numbers to IP port numbers.
poweroff : Shuts down the machine.
pppd : Point-to-point protocol daemon.
pr : Convert (column or paginate) text files for printing.
praliases : Prints the current system mail aliases.
printcap : Printer capability database.
printenv : Show values of all or specified environment variables.
printf : Show arguments formatted according to a specified format.
ps : Report a snapshot of the current processes.
ptx : Produce a permuted index of file contents.
pushd : Appends a given directory name to the head of the stack and then cd to the given directory.
pv : Monitor progress of data through a pipe.
pwck : Verify integrity of password files.
pwconv : Creates shadow from passwd and an optionally existing shadow.
pwd : Show current directory.
python : Python language interpreter.
Linux Commands – Q
quota : Shows disk usage, and space limits for a user or group. Without arguments, only shows user quotas.
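paste's TAB-separated merging, described above, looks like this on two tiny files (names made up):

```shell
printf 'a\nb\n' > letters.txt
printf '1\n2\n' > numbers.txt
paste letters.txt numbers.txt   # each output line: letter, TAB, number
```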
quotacheck : Used to scan a file system for disk usage.
quotactl : Make changes to disk quotas.
quotaoff : Disable enforcement of filesystem quotas.
quotaon : Enable enforcement of filesystem quotas.
quotastats : Shows the report of quota system statistics gathered from the kernel.
Linux Commands – R
raidstart : Start/stop RAID devices.
ram : RAM disk device used to access the RAM disk in raw mode.
ramsize : Show usage information for the RAM disk.
ranlib : Generate an index to the contents of an archive and store it in the archive.
rar : Create and manage RAR files in Linux.
rarpd : Respond to Reverse Address Resolution Protocol (RARP) requests.
rcp : Remote copy command to copy files between remote computers.
rdate : Set system date and time by fetching information from a remote machine.
rdev : Set or query RAM disk size, image root device, or video mode.
rdist : Remote file distribution client, maintains identical file copies over multiple hosts.
rdistd : Start the rdist server.
read : Read from a file descriptor.
readarray : Read lines from a file into an array variable.
readcd : Read/write compact discs.
readelf : Shows information about ELF (Executable and Linkable Format) files.
readlink : Display value of a symbolic link or canonical file name.
readonly : Mark functions and variables as read-only.
reboot : Restart the machine.
reject : Accept/reject print jobs sent to a specified destination.
remsync : Synchronize remote files over email.
rename : Rename one or more files.
renice : Change priority of active processes.
repquota : Report disk usage and quotas for a specified filesystem.
reset : Reinitialize the terminal.
resize2fs : Used to resize ext2/ext3/ext4 file systems.
restore : Restore files from a backup created using dump.
return : Exit a shell function.
rev : Show contents of a file, reversing the order of characters in every line.
rexec : Remote execution client for the exec server.
rexecd : Remote execution server.
richtext : View "richtext" on an ASCII terminal.
rlogin : Used to connect a local host system with a remote host.
rlogind : Acts as the server for rlogin. It facilitates remote login, and authentication based on privileged port numbers from trusted hosts.
rm : Removes specified files and directories (not by default).
rmail : Handle remote mail received via uucp.
rmdir : Used to remove empty directories.
rmmod : A program to remove modules from the Linux kernel.
rndc : Name server control utility. Send commands to a BIND DNS server over a TCP connection.
rootflags : Show/set flags for the kernel image.
route : Show/change IP routing table.
routed : A daemon, invoked at boot time, to manage internet routing tables.
rpcgen : An RPC protocol compiler. Parse a file written in the RPC language.
rpcinfo : Shows RPC information. Makes an RPC call to an RPC server and reports the findings.
rpm : A package manager for Linux distributions. Originally developed for Red Hat Linux.
rsh : Remote shell. Connects to a specified host and executes commands.
rshd : A daemon that acts as a server for rsh and rcp commands.
rsync : A versatile tool for copying files remotely and locally.
runlevel : Shows previous and current SysV runlevel.
rup : Remote status display. Shows current system status for all or specified hosts on the local network.
ruptime : Shows uptime and login details of the machines on the local network.
rusers : Shows the list of the users logged in to the host or on all machines on the local network.
rusersd : The rusersd daemon acts as a server that responds to the queries from the rusers command.
rwall : Sends messages to all users on the local network.
rwho : Reports who is logged in to the hosts on the local network.
rwhod : Acts as a server for rwho and ruptime commands.
Linux Commands – S
sane-find-scanner : Find SCSI and USB scanners and determine their device files.
scanadf : Retrieve multiple images from a scanner equipped with an automatic document feeder (ADF).
scanimage : Read images from image acquisition devices (scanner or camera) and display on standard output in PNM (Portable aNyMap) format. scp : Copy files between hosts on a network securely using SSH. screen : A window manager that enables multiple pseudo-terminals with the help of ANSI/VT100 terminal emulation. script : Used to make a typescript of everything displayed on the screen during a terminal session. sdiff : Shows two files side-by-side and highlights the differences. sed : Stream editor for filtering and transforming text (from a file or a pipe input). select : Synchronous I/O multiplexing. sendmail : It’s a mail router or an MTA (Mail Transfer Agent). sendmail can send mail to one or more recipients using the necessary protocols. sensors : Shows the current readings of all sensor chips. seq : Displays an incremental sequence of numbers from first to last. set : Used to manipulate shell variables and functions. setfdprm : Sets floppy disk parameters as provided by the user. setkeycodes : Load kernel scancode-to-keycode mapping table entries. setleds : Show/change LED light settings of the keyboard. setmetamode : Define keyboard meta key handling. Without arguments, shows current meta key mode. setquota : Set disk quotas for users and groups. setsid : Run a program in a new session. setterm : Set terminal attributes. sftp : Secure File Transfer program. sh : Command interpreter (shell) utility. sha1sum : Compute and check 160-bit SHA1 checksums to verify file integrity. shift : Shift positional parameters. shopt : Shell options. showkey : Examines codes sent by the keyboard and displays them in printable form. showmount : Shows information about NFS server mounts on the host. shred : Overwrite a file to hide its content (optionally delete it), making it harder to recover. shutdown : Power-off the machine. size : Lists section sizes and the total size of a specified file. skill : Send a signal to processes.
slabtop : Show kernel slab cache information in real-time. slattach : Attach a network interface to a serial line. sleep : Suspend execution for a specified amount of time (in seconds). slocate : Display matches by searching filename databases. Takes ownership and file permissions into consideration. snice : Reset priority for processes. sort : Sort lines of text files. source : Run commands from a specified file. split : Split a file into pieces of fixed size. ss : Display socket statistics, similar to netstat. ssh : An SSH client for logging in to a remote machine. It provides encrypted communication between the hosts. ssh-add : Adds private key identities to the authentication agent. ssh-agent : It holds private keys used for public key authentication. ssh-keygen : It generates, manages, and converts authentication keys for ssh. ssh-keyscan : Gather ssh public keys. sshd : Server for the ssh program. stat : Display file or filesystem status. statd : A daemon that listens for reboot notifications from other hosts, and manages the list of hosts to be notified when the local system reboots. strace : Trace system calls and signals. strfile : Create a random access file for storing strings. strings : Search a specified file and print any printable strings with at least four characters followed by an unprintable character. strip : Discard symbols from object files. stty : Change and print terminal line settings. su : Change user ID or become superuser. sudo : Execute a command as superuser. sum : Checksum and count the blocks in a file. suspend : Suspend the execution of the current shell. swapoff : Disable devices for paging and swapping. swapon : Enable devices for paging and swapping. symlink : Create a symbolic link to a file. sync : Synchronize cached writes to persistent storage. sysctl : Configure kernel parameters at runtime. sysklogd : Linux system logging utilities. Provides syslogd and klogd functionalities.
syslogd : Read and log system messages to the system console and log files. Linux Commands – T tac : Concatenate and print files in reverse order. Opposite of the cat command. tail : Show the last 10 lines of each specified file(s). tailf : Follow the growth of a log file. (Deprecated command) talk : A two-way screen-oriented communication utility that allows two users to exchange messages simultaneously. talkd : A remote user communication server for talk. tar : GNU version of the tar archiving utility. Used to store and extract multiple files from a single archive. taskset : Set/retrieve a process’s CPU affinity. tcpd : Access control utility for internet services. tcpdump : Dump traffic on a network. Displays a description of the contents of packets on a network interface that match the boolean expression. tcpslice : Extract pieces of tcpdump files or merge them. tee : Read from standard input and write to standard output and files. telinit : Change SysV runlevel. telnet : Telnet protocol user interface. Used to interact with another host using telnet. telnetd : A server for the telnet protocol. test : Check file types and compare values. tftp : User interface to the internet TFTP (Trivial File Transfer Protocol). tftpd : TFTP server. time : Run programs and summarize system resource usage. timeout : Execute a command with a time limit. times : Shows accumulated user and system times for the shell and its child processes. tload : Shows a graph of the current system load average on the specified tty. tmpwatch : Recursively remove files and directories which haven’t been accessed for the specified period of time. top : Displays a real-time view of processes running on the system. touch : Change file access and modification times. tput : Modify terminal-dependent capabilities, color, etc. tr : Translate, squeeze, or delete characters from standard input and display on standard output.
tracepath : Traces the path to a network host, discovering the MTU (Maximum Transmission Unit) along this path. traceroute : Traces the route taken by the packets to reach the network host. trap : The trap function responds to hardware signals. It defines and creates handlers to run when the shell receives signals. troff : The troff processor of the groff text formatting system. true : Exit with a status code indicating success. tset : Initialize terminal. tsort : Perform topological sort. tty : Display the filename of the terminal connected to standard input. tune2fs : Adjust tunable filesystem parameters on ext2/ext3/ext4 filesystems. tunelp : Set various parameters for the line printer devices. type : Describe how a specified command name would be interpreted. Linux Commands – U ul : Underline text. ulimit : Get and set user limits for the calling process. umask : Set file mode creation mask. umount : Unmount specified file systems. unalias : Remove alias definitions for specified alias names. uname : Show system information. uncompress : Uncompress the files compressed with the compress command. unexpand : Convert spaces to tabs for a specified file. unicode_start : Put keyboard and console in Unicode mode. unicode_stop : Revert keyboard and console from Unicode mode. uniq : Report or omit repeated lines. units : Convert units from one scalar to another. unrar : Extract files from a RAR archive. unset : Remove variable or function names. unshar : Unpack shell archive scripts. until : Execute commands until a given condition is true. uptime : Tell how long the system has been running. useradd : Create a new user or update default user information. userdel : Delete a user account and related files. usermod : Modify a user account. users : Show the list of active users on the machine. usleep : Suspend execution for microsecond intervals. uudecode : Decode a binary file. uuencode : Encode a binary file. uuidgen : Create a new UUID (Universally Unique Identifier).
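Several of the text-processing commands listed above (tr from the T section, sort from the S section, uniq from the U section) compose naturally in a pipeline. A small sketch — the sample sentence is made up for illustration:

```shell
# uniq only collapses ADJACENT duplicate lines, so sort the stream first.
printf 'the quick the lazy the dog\n' | tr ' ' '\n' | sort | uniq -c
```

tr turns the sentence into one word per line, sort groups identical words together, and uniq -c prefixes each distinct word with its count.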
Linux Commands – V vdir : Same as ls -l -b. Verbosely list directory contents. vi : A text editor utility. vidmode : Set the video mode for a kernel image. Displays current mode value without arguments. Alternative: rdev -v vim : Vi Improved, a text-based editor which is a successor to vi. vmstat : Shows information about processes, memory, paging, block IO, traps, disks, and CPU activity. volname : Returns the volume name for a device formatted with an ISO-9660 filesystem. For example, a CD-ROM. Linux Commands – W w : Show who is logged-on and what they’re doing. wait : Waits for a specified process ID(s) to terminate and returns the termination status. wall : Display a message on the terminals of all the users who are currently logged-in. warnquota : Send mail to the users who’ve exceeded their disk quota soft limit. watch : Runs commands repeatedly until interrupted and shows their output and errors. wc : Print newline, word, and byte counts for each of the specified files. wget : A non-interactive file download utility. whatis : Display one-line manual page descriptions. whereis : Locate the binary, source, and man page files for a command. which : For a given command, lists the pathnames for the files which would be executed when the command runs. while : Conditionally execute commands (while loop). who : Shows who is logged on. whoami : Displays the username tied to the current effective user ID. whois : Looks for an object in a WHOIS database. write : Display a message on another user’s terminal. Linux Commands – X xargs : Runs a command using initial arguments and then reads remaining arguments from standard input. xdg-open : Used to open a file or URL in an application preferred by the user. xinetd : Extended internet services daemon. Works similar to inetd. xz : Compress/decompress .xz and .lzma files. Linux Commands – Y yacc : Yet Another Compiler Compiler, a GNU Project parser generator. yes : Repeatedly output a line with a specified string(s) until killed.
ypbind : A daemon that helps client processes to connect to an NIS server. ypcat : Shows the NIS map (or database) for the specified MapName parameter. ypinit : Sets up NIS maps on an NIS server. ypmatch : Shows values for specified keys from an NIS map. yppasswd : Change NIS login password. yppasswdd : Acts as a server for the yppasswd command. Receives and executes requests. yppoll : Shows the ID number or version of the NIS map currently used on the NIS server. yppush : Forces slave NIS servers to copy updated NIS maps. ypserv : A daemon activated at system startup. It looks for information in local NIS maps. ypset : Point a client (running ypbind) to a specific server (running ypserv). yptest : Calls various functions to check the configuration of NIS services. ypwhich : Shows the hostname of the NIS server or master server for a given map. ypxfr : Transfers an NIS server map from the server to a local host. Linux Commands – Z zcat : Print the uncompressed contents of gzip-compressed files to standard output. Similar to gunzip -c. zcmp : Compare compressed files. zdiff : Compare compressed files line by line. zdump : Displays time for the timezone mentioned. zforce : Adds a .gz extension to all gzipped files. zgrep : Performs grep on compressed files. zic : Creates time conversion information files using the specified input files. zip : A file compression and packaging utility. zless : Displays information of a compressed file (using the less command) on the terminal one screen at a time. zmore : Displays output of a compressed file (using the more command) on the terminal one page at a time. znew : Recompress .Z files to .gz.
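Several of the z-commands above operate on compressed data directly, without a separate decompression step. A hedged example (the file name demo.txt is arbitrary):

```shell
printf 'alpha\nbeta\nalpha\n' > demo.txt
gzip -f demo.txt                # replaces demo.txt with demo.txt.gz
zcat demo.txt.gz                # print the uncompressed contents to stdout
zgrep -c 'beta' demo.txt.gz     # grep inside the archive without unpacking it
```

This keeps logs and data files compressed on disk while still letting you inspect and search them.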
http://www.educratsweb.com/content.php?id=1537
A brief summary of Clean Code. In this post you will find many ideas from the book, a bit more, and how they can be applied to TypeScript. Meaningful Names / Avoid mental mapping Let the variable name tell you what it represents, don’t name it something that will make you constantly look up its definition to know what it is. Can you spot the problem? ❌ Wrong const d1 = d.getBd() const d2 = d.getMd() In the example above, you cannot really tell what d is and what getBd() and getMd() return. To find out, you would have to look up the definition and keep that information in mind. This is especially bad when you are a developer that didn’t originally write the code and have to make changes on it. To avoid the mental mapping you can use meaningful names ✔️ Right const birthDate = user.getBirthDate() const mediumDate = user.getMediumCreatedDate() This way there is no need to look it up, you know what they mean by just reading it. Use intention revealing names Make the intention of your variable clear, don’t omit the intention. Can you spot the problem? ❌ Wrong const days = 30 Without context it doesn’t say much, it could be a lot of different things, instead you could reveal the intention ✔️ Right const daysUntilNextYear = 30 This makes clear what the variable actually represents. Avoid disinformation Make it clear and precise what the variable/method is used for, don’t give it names that might mislead developers into thinking it might do something other than it says. Can you spot the problem? ❌ Wrong window.addEventListener('move', beforeMove) The function to handle move is called beforeMove, but that is misleading, the event is triggered during move, not before. Use a name that reflects what it is actually doing ✔️ Right window.addEventListener('move', onMove) This way the event name doesn’t mislead the developer into thinking it executes in a different situation.
Make meaningful distinction Give variable names that make it more accurate, avoid creating variables that could have multiple meanings. Can you spot the problem? ❌ Wrong const money = 123 This variable without context doesn’t tell us exactly which currency it’s referring to, instead we can make it clear by making it more specific ✔️ Right const moneyInUSD = 123 This way you can easily see the difference between this variable and other money variables, for example before a conversion it could be money in rupees. Use pronounceable names Use names that can be pronounced, this way it makes discussing the implementation with another developer much easier. Can you spot the problem? ❌ Wrong const srcImg = '' const dstImg = '' Imagine two developers pair programming, it would be awkward to talk saying unpronounceable names. ✔️ Right const sourceImage = '' const destinationImage = '' By naming variables with pronounceable names it makes talking about it much easier, and even improves code readability. Use searchable names We often try to search the code for a piece of code that does something we are expecting, so by adding searchable names it makes it much easier to find things. Can you spot the problem? ❌ Wrong function renderFiter() Typos are an issue too, you might be looking for renderFilter but if it’s misspelled you might not find it or have a hard time to find it. Be careful with typos and try to use the simplest term so another developer will try to search for the same term. ✔️ Right function renderFilter() This works because it’s correctly spelled and it’s using a simple term, so it’s easy to look for. Hungarian notation This is something that was needed decades ago because the compiler would use the prefix on the variable name to determine the type. Nowadays this is completely irrelevant. Can you spot the problem? ❌ Wrong const sName = '' s in name doesn’t add any useful information for the developer and just adds unnecessary noise. 
✔️ Right const name = '' Compilers nowadays don’t need the variable type in the name and there are tools to inform the developer of the type of the variable they are using. Use one word per concept When you need to perform an operation, for example a GET request to an API you could write different verbs in your method name, but you should stick to a single word for that concept. Can you spot the problem? ❌ Wrong function getUsers() function fetchCompanies() function listColors() In the example above it’s using 3 different terms to refer to the same thing, this makes searching harder and the code inconsistent. Instead you could pick one word and stick to it across your application. ✔️ Right function fetchUsers() function fetchCompanies() function fetchColors() This way it will be consistent, when you make a GET API call you can search for what you want using fetch###. Add meaningful context Context is important to determine the meaning of a variable, if you have a variable money without context you can’t really tell what the currency is. Can you spot the problem? ❌ Wrong const money = 123 There is no context to tell us what the currency is, this could be fixed by making the variable name more specific or wrapping the variable in a context. ✔️ Right const credits = { currency: '$', money: 123, } This way both the currency and money are in the same context, so it’s easy to tell exactly how much that amount is. Functions A function should be as easy to read as a note, if you start having to “compile” the code in your head and read in different conditions then something is wrong. Can you spot the problem? ❌ Wrong function parseEffect(jump = false, rotate = true, fade = true) { if (jump && rotate) { } else if (jump && fade) { } else if (rotate && fade) { } else ... } This function has a lot of boolean parameters, which is going to increase the cyclomatic complexity of the function a lot, making it really hard to read.
What could be done instead is break it into multiple smaller functions ✔️ Right function parseJump() function parseRotate() function parseFade() With smaller functions you can break a big problem into smaller problems, making it easier to understand and even implement. Do one thing / Have no side effects Functions should do a single thing to avoid causing side-effects (things that you don’t expect to happen in that function) Can you spot the problem? ❌ Wrong function saveUser(user) { database.start() database.users.save(user) } In the example above you cannot save a user without initializing a database connection, this might cause unexpected behavior. Instead your function should handle one single task. ✔️ Right function saveUser(user) { database.users.save(user) } This way there will be no side effects, the function is going to do only what it says it’s going to do. Function parameters The more parameters you have the more complex your function is going to be, you should always aim to keep a function as simple as possible. Flag parameters Flag parameters are especially bad because they create a new ramification of your function. Can you spot the problem? ❌ Wrong function showNotification(animate: boolean) In the example above your function is doing two things: showing the notification without animation and with animation. Also, if you see this code showNotification(true) Can you immediately tell what it is doing, what it means? Your functions should tell, not ask. ✔️ Right function showNotification() function showNotificationAnimated() This way it is very clear what the function is doing and in the middle of more code it makes the reading more fluid. Functions with two/three parameters Functions with multiple parameters increase the function's cyclomatic complexity, it’s better to have functions tell instead of ask. Sometimes that’s required, for example: ✔️ Right const point = new Point(x, y) You need x and y to form a point.
Functions with object arguments It’s ok to pass object parameters, in the end they represent one parameter each. Can you spot the problem? ❌ Wrong function distance(x1, y1, x2, y2) In the example above I’m not using objects, I’m passing 4 parameters, but in reality I want to pass 2 parameters (points). ✔️ Right interface Point { x: number; y: number } function distance(point1: Point, point2: Point) This way you logically group parameters and reduce the total of arguments your function has to handle. Comments are sometimes ok, they can help understand edge cases, warn about consequences. In many other cases it might be just noise, you should always explain yourself in code. Good: explain yourself in code Make your code legible instead of adding comments to explain it. Can you spot the problem? ❌ Wrong // Check to see if the employee is eligible for full benefits if ((employee.flags & HOURLY_FLAG) && (employee.age > 65)) The comment here is just noise, the code could explain itself. ✔️ Right if (employee.isEligibleForFullBenefits()) By extracting the condition to a variable/function/method you can make the code tell you what it does. Good: informative comments Comments explaining something complex that cannot be changed, for example a regex. Can you spot the problem? ❌ Wrong const customUrlPattern = /\/([\w-]*-)?(\d+)/ This is a bit hard to read, a comment could explain it briefly. ✔️ Right // matches /some-name-<id> url pattern const customUrlPattern = /\/([\w-]*-)?(\d+)/ The comment here makes “reading/debugging” the regex easier, so the comment is not irrelevant. Good: explanation of intent Sometimes you might have a piece of code that doesn’t really say much by itself, but a comment might help explain why that is there. Can you spot the problem? ❌ Wrong setTimeout(updateScreen, 1000) In the code above we can’t tell why the setTimeout is there just by reading the code. A comment would help developers understand why it is there. 
✔️ Right // we are delaying execution due to a temporary bug // that is causing the screen to flicker setTimeout(updateScreen, 1000) This way you know why it is there. Good: warning about consequences Sometimes you might have a test that is skipped, or a piece of code that only executes under certain conditions and a comment to explain it would be helpful. Can you spot the problem? ❌ Wrong describe.skip('some test', () => { In the example above there’s no explanation of why the test is being skipped, one could think it’s a bug. A comment explaining why would be helpful. ✔️ Right // this test takes 10m to execute, skipping on CI describe.skip('some test', () => { This way it makes clear that the code is not an accident, it was intentionally put there for a reason and it warns about consequences of not skipping. TODO comments In my personal opinion: you should not use TODO comments, if there’s anything that needs to be done it should go on your task tracking system’s backlog. Otherwise in most cases it just gets stale and forgotten. BAD: Noisy comments Can you spot the problem? ❌ Wrong // user id const userId = 1 The variable name already states that it refers to the user id, there’s no need for a comment stating the same. ✔️ Right const userId = 1 This is just cleaner and already tells you what it means. BAD: commenting code Don’t comment code, just delete it. Can you spot the problem? ❌ Wrong // function sum(arr) { // return arr.reduce((a, b) => a + b) // } Commented code doesn’t do anything more than occupy space on your file. If you ever need it you can go back on your version control (e.g. Git) and restore it. ✔️ Right Delete unused code. It makes the code cleaner. Principles and techniques Some principles and techniques you can use. Don’t repeat yourself Also known as DRY, means that you shouldn’t have duplicated code. You can create a function and reuse it. Can you spot the problem?
❌ Wrong
// home.tsx
if (type === 'announcement') {
  return (
    <div className="message-wrapper">...</div>
  )
}
// portal.tsx
if (type === 'announcement') {
  return (
    <div className="message-wrapper">...</div>
  )
}
Code that does the exact same thing in two places is bad, you will have double the work to change, test, etc. This is known as WET (“write everything twice”, “we enjoy typing” or “waste everyone’s time”). Instead you could write a function and reuse it in different places.
✔️ Right
// announcement.tsx
renderWarning() {
  if (type === 'announcement') {
    return (
      <div className="message-wrapper">...</div>
    )
  }
}
// home.tsx
renderWarning()
// portal.tsx
renderWarning()
This way you only have to maintain code in one place and any changes will be easier to implement. Keep It Simple, Stupid (KISS) Simple things are easier to maintain, easier for people to learn and use. Avoid unnecessary complexity. Albert Einstein once said something like “Make everything as simple as possible, but not simpler”. LIFT From the Angular documentation: Locate code quickly, Identify the code at a glance, keep the Flattest structure you can, and Try to be DRY. You Aren’t Gonna Need It (YAGNI) Deleting code is a good thing. Keep it simple and don’t add unnecessary stuff. Single Responsibility Methods/functions should only be responsible for executing a single task. Clean code is all about keeping the code simple, easy to read, easy to change. Disclaimer: I’m using ❌ Wrong and ✔️ Right just to make it clearer which is the preferred approach, it’s not a rule that says that the approach is absolutely right or wrong.
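The Single Responsibility principle above can be sketched in code. The names below (reportTotal, computeTotal, formatTotal) are made up for illustration; the point is that computing and formatting are separated into functions that each do a single task:

```typescript
// ❌ one function that computes AND formats — two responsibilities
function reportTotal(prices: number[]): string {
  const total = prices.reduce((a, b) => a + b, 0)
  return `Total: $${total.toFixed(2)}`
}

// ✔️ each function has a single responsibility
function computeTotal(prices: number[]): number {
  return prices.reduce((a, b) => a + b, 0)
}

function formatTotal(total: number): string {
  return `Total: $${total.toFixed(2)}`
}

// The computation can now be tested and reused without the formatting.
console.log(formatTotal(computeTotal([1.5, 2.5])))
```

With the responsibilities split, computeTotal can be unit-tested and reused (e.g. for a different output format) without touching the formatting code.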
https://medium.com/@BrunoLM7/a-brief-summary-of-clean-code-c0c557739551?source=---------6------------------
Hot questions for Using Neural networks in max pooling Question: gru_out = Bidirectional(GRU(hiddenlayer_num, return_sequences=True))(embedded) #Tensor("concat_v2_8:0", shape=(?, ?, 256), dtype=float32) I use Keras to create a GRU model. I want to gather information from all the node vectors of the GRU model, instead of the last node vector. For example, I need to get the maximum value of each vector, like the image description, but I have no idea how to do this. Answer: One may use GlobalMaxPooling1D described here: gru_out = Bidirectional(GRU(hiddenlayer_num, return_sequences=True))(embedded) max_pooled = GlobalMaxPooling1D()(gru_out) Question: I'm trying to fit data with the following shape to the pretrained keras VGG16 model. The image input shape is (32383, 96, 96, 3) and the label shape is (32383, 17), and I got this error expected block5_pool to have 4 dimensions, but got array with shape (32383, 17) at this line model.fit(x = X_train, y= Y_train, validation_data=(X_valid, Y_valid), batch_size=64,verbose=2, epochs=epochs,callbacks=callbacks,shuffle=True) Here's how I define my model: model = VGG16(include_top=False, weights='imagenet', input_tensor=None, input_shape=(96,96,3),classes=17) How did maxpool give me a 2d tensor and not a 4D tensor? I'm using the original model from keras.applications.vgg16. How can I fix this error? Answer: Your problem comes from VGG16(include_top=False,...) as this makes your model load only the convolutional part of VGG. This is why Keras is complaining that it got a 2-dimensional output instead of a 4-dimensional one (4 dimensions come from the fact that convolutional output has shape (nb_of_examples, width, height, channels)). In order to overcome this issue you need to either set include_top=True or add additional layers which will squash the convolutional part to a 2d one (by e.g.
using Flatten, GlobalMaxPooling2D, GlobalAveragePooling2D and a set of Dense layers - including a final one which should be a Dense with size of 17 and softmax activation function). Question: This question is a tough one: How can I feed a neural network a dynamic input? Answering this question will certainly help the advance of modern AI using deep learning for applications other than computer vision and speech recognition. I will explain this problem further for the laymen on neural networks. Let's take this simple example for instance: Say you need to know the probability of winning, losing or drawing in a game of "tic-tac-toe". So my input could be a [3,3] matrix representing the state (1-You, 2-Enemy, 0-Empty): [2. 1. 0.] [0. 1. 0.] [2. 2. 1.] Let's assume we already have a previously trained hidden layer, a [3,1] matrix of weights: [1.5] [0.5] [2.5] So if we use a simple activation function that consists basically of a matrix multiply between the two y(x)=W*x we get this [3,1] matrix in the output: [2. 1. 0.] [1.5] [3.5] [0. 1. 0.] * [0.5] = [0.5] [2. 2. 1.] [2.5] [6.5] Even without a softmax function you can tell that the highest probability is of having a draw. But what if I want this same neural network to work for a 5x5 game of tic-tac-toe? It has the same logic as the 3x3, it's just bigger. The neural network should be able to handle it. We would have something like: [2. 1. 0. 2. 0.] [0. 2. 0. 1. 1.] [1.5] [?] [2. 1. 0. 0. 1.] * [0.5] = [?] IMPOSSIBLE [0. 0. 2. 2. 1.] [2.5] [?] [2. 1. 0. 2. 0.] But this multiplication would be impossible to compute. We would have to add more layers and/or change our previously trained one and RETRAIN it, because the untrained weights (initialized with 0 in this case) would cause the neural network to fail, like so: input 1st Layer output1 [2. 1. 0. 2. 0.] [0. 0. 0.] [6.5 0. 0.] [0. 2. 0. 1. 1.] [1.5 0. 0.] [5.5 0. 0.] [2. 1. 0. 0. 1.] * [0.5 0. 0.] = [1.5 0. 0.] [0. 0. 2. 2. 1.] [2.5 0. 0.] [6. 0. 0.] [2. 1. 0. 2. 0.] [0.
0. 0.] [6.5 0. 0.] 2nd Layer output1 final output [6.5 0. 0.] [5.5 0. 0.] [0. 0. 0. 0. 0.] * [1.5 0. 0.] = [0. 0. 0.] POSSIBLE [6. 0. 0.] [6.5 0. 0.] Because we expanded the first layer and added a new layer of zero weights, our result is obviously inconclusive. If we apply a softmax function we will realize that the neural network is returning a 33.3% chance for every possible outcome. We would need to train it again. Obviously we want to create generic neural networks that can adapt to different input sizes, however I haven't thought of a solution for this problem yet! So I thought maybe stackoverflow can help. Thousands of heads think better than one. Any ideas? Answer: There are solutions for Convolutional Neural Networks apart from just resizing the input to a fixed size. Spatial Pyramid Pooling allows you to train and test CNNs with variable sized images, and it does this by introducing a dynamic pooling layer, where the input can be of any size, and the output is of a fixed size, which can then be fed to the fully connected layers. The pooling is very simple: one defines a number of regions in each dimension (say 7x7), and then the layer splits each feature map into non-overlapping 7x7 regions and does max-pooling on each region, outputting a 49-element vector. This can also be applied at multiple scales. Question: I'm building a convolutional neural network with numpy, and I'm not sure that my pooling treatment of the 3D (HxWxD) input image is correct. As an example, I have an image shaped (12x12x3). I convolve it to (6x6x3), and I want to perform max pooling such that I obtain a (3x3x3) image. To do this, I choose a filter size of (2x2) and a stride of 2.
output_size = int((conv.shape[0]-F)/S + 1) pool = np.zeros((output_size,output_volume,3)) # pool array for k in range(conv.shape[-1]): # loop over conv depth i_stride = 0 for i in range(output_size): j_stride = 0 for j in range(output_size): pool[i,j,k] = np.amax(conv[i_stride:i_stride+F, j_stride:j_stride+F,k],0) j_stride+=S i_stride+=S For the first channel of my convolution array conv[:,:,0] I obtain the following. Comparing this with the first channel of the max pooling array pool[:,:,0] I get. At a glance I can tell that the pooling operation is not correct, conv[0:2,0:2,0] (mostly gray) is most definitely not pool[0,0,0] (black), you'd expect it to be one of the shades of gray. So, I'm convinced that something is definitely wrong here. Either my for loop or the two comparisons I'm making are off. If anyone can help me better understand the pooling operation over the array with 3 dimensions, that will definitely help. Answer: Maximum pooling produces the same depth as it's input. With that in mind we can focus on a single slice (along depth) of the input conv. For a single slice at an arbitrary index, you have a simple image of NxN dimensions. You defined your filter size 2, and stride 2. Max pooling does nothing more than iterate over the input image and get the maximum over the current "subimage". 
import numpy as np F = 2 S = 2 conv = np.array( [ [ [[.5, .1], [.1, .0], [.2, .7], [.1, .3], [.0, .1], [.3, .8]], [[.0, .9], [.5, .7], [.3, .1], [.9, .2], [.8, .7], [.1, .9]], [[.1, .8], [.1, .2], [.6, .2], [.0, .3], [.1, .3], [.0, .8]], [[.0, .6], [.6, .4], [.2, .8], [.6, .8], [.9, .1], [.3, .1]], [[.3, .9], [.7, .6], [.7, .6], [.5, .4], [.7, .2], [.8, .1]], [[.1, .8], [.9, .3], [.2, .7], [.8, .4], [.0, .5], [.8, .0]] ], [ [[.1, .2], [.1, .0], [.5, .3], [.0, .4], [.0, .5], [.0, .6]], [[.3, .6], [.6, .4], [.1, .2], [.6, .2], [.2, .3], [.2, .4]], [[.2, .1], [.4, .2], [.0, .4], [.5, .6], [.7, .6], [.7, .2]], [[.0, .7], [.5, .3], [.4, .0], [.4, .6], [.2, .2], [.2, .7]], [[.0, .5], [.3, .0], [.3, .8], [.3, .2], [.6, .3], [.5, .2]], [[.6, .2], [.2, .5], [.5, .4], [.1, .0], [.2, .6], [.1, .8]] ] ]) number_of_images, image_height, image_width, image_depth = conv.shape output_height = (image_height - F) // S + 1 output_width = (image_width - F) // S + 1 pool = np.zeros((number_of_images, output_height, output_width, image_depth)) for k in range(number_of_images): for i in range(output_height): for j in range(output_width): pool[k, i, j, :] = np.max(conv[k, i*S:i*S+F, j*S:j*S+F, :]) print(pool[0, :, :, 0]) [[0.9 0.9 0.9] [0.8 0.8 0.9] [0.9 0.8 0.8]] print(pool[0, :, :, 1]) [[0.9 0.9 0.9] [0.8 0.8 0.9] [0.9 0.8 0.8]] print(pool[1, :, :, 0]) [[0.6 0.6 0.6] [0.7 0.6 0.7] [0.6 0.8 0.8]] print(pool[1, :, :, 1]) [[0.6 0.6 0.6] [0.7 0.6 0.7] [0.6 0.8 0.8]] It's not clear to me why you're using transpose of the max row for a single element in the pool. Question: I'm using Theano 0.7 to create a convolutional neural net which uses max-pooling (i.e. shrinking a matrix down by keeping only the local maxima). 
In order to "undo" or "reverse" the max-pooling step, one method is to store the locations of the maxima as auxiliary data, then simply recreate the un-pooled data by making a big array of zeros and using those auxiliary locations to place the maxima in their appropriate locations. Here's how I'm currently doing it:

import numpy as np
import theano
import theano.tensor as T

minibatchsize = 2
numfilters = 3
numsamples = 4
upsampfactor = 5

# HERE is the function that I hope could be improved
def upsamplecode(encoded, auxpos):
    shp = encoded.shape
    upsampled = T.zeros((shp[0], shp[1], shp[2] * upsampfactor))
    for whichitem in range(minibatchsize):
        for whichfilt in range(numfilters):
            upsampled = T.set_subtensor(upsampled[whichitem, whichfilt, auxpos[whichitem, whichfilt, :]],
                                        encoded[whichitem, whichfilt, :])
    return upsampled

totalitems = minibatchsize * numfilters * numsamples

code = theano.shared(np.arange(totalitems).reshape((minibatchsize, numfilters, numsamples)))

auxpos = np.arange(totalitems).reshape((minibatchsize, numfilters, numsamples)) % upsampfactor # arbitrary positions within a bin
auxpos += (np.arange(4) * 5).reshape((1,1,-1)) # shifted to the actual temporal bin location
auxpos = theano.shared(auxpos.astype(np.int))

print "code:"
print code.get_value()
print "locations:"
print auxpos.get_value()

get_upsampled = theano.function([], upsamplecode(code, auxpos))
print "the un-pooled data:"
print get_upsampled()

(By the way, in this case I have a 3D tensor, and it's only the third axis that gets max-pooled. People who work with image data might expect to see two dimensions getting max-pooled.)

The output is:

code:
[[[ 0  1  2  3]
  [ 4  5  6  7]
  [ 8  9 10 11]]

 [[12 13 14 15]
  [16 17 18 19]
  [20 21 22 23]]]
locations:
[[[ 0  6 12 18]
  [ 4  5 11 17]
  [ 3  9 10 16]]

 [[ 2  8 14 15]
  [ 1  7 13 19]
  [ 0  6 12 18]]]
the un-pooled data:
[[[  0.  0.  0.  0.  0.  0.  1.  0.  0.  0.  0.  0.  2.  0.  0.  0.  0.  0.  3.  0.]
  [  0.  0.  0.  0.  4.  5.  0.  0.  0.  0.  0.  6.  0.  0.  0.  0.  0.  7.  0.  0.]
  [  0.  0.  0.  8.  0.  0.  0.  0.  0.  9. 10.  0.  0.  0.  0.  0. 11.  0.  0.  0.]]

 [[  0.  0. 12.  0.  0.  0.  0.  0. 13.  0.  0.  0.  0.  0. 14. 15.  0.  0.  0.  0.]
  [  0. 16.  0.  0.  0.  0.  0. 17.  0.  0.  0.  0.  0. 18.  0.  0.  0.  0.  0. 19.]
  [ 20.  0.  0.  0.  0.  0. 21.  0.  0.  0.  0.  0. 22.  0.  0.  0.  0.  0. 23.  0.]]]

This method works but it's a bottleneck, taking most of my computer's time (I think the set_subtensor calls might imply cpu<->gpu data copying). So: can this be implemented more efficiently? I suspect there's a way to express this as a single set_subtensor() call which may be faster, but I don't see how to get the tensor indexing to broadcast properly.

UPDATE: I thought of a way of doing it in one call, by working on the flattened tensors:

def upsamplecode2(encoded, auxpos):
    shp = encoded.shape
    upsampled = T.zeros((shp[0], shp[1], shp[2] * upsampfactor))
    add_to_flattened_indices = theano.shared(np.array([
        [[(y + z * numfilters) * numsamples * upsampfactor
          for x in range(numsamples)]
         for y in range(numfilters)]
        for z in range(minibatchsize)],
        dtype=theano.config.floatX).flatten(), name="add_to_flattened_indices")
    upsampled = T.set_subtensor(
        upsampled.flatten()[T.cast(auxpos.flatten() + add_to_flattened_indices, 'int32')],
        encoded.flatten()).reshape(upsampled.shape)
    return upsampled

get_upsampled2 = theano.function([], upsamplecode2(code, auxpos))
print "the un-pooled data v2:"
ups2 = get_upsampled2()
print ups2

However, this is still not good efficiency-wise because when I run this (added on to the end of the above script) I find out that the Cuda libraries can't currently do the integer index manipulation efficiently:

ERROR (theano.gof.opt): Optimization failure due to: local_gpu_advanced_incsubtensor1
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/theano/gof/opt.py", line 1493, in process_node
    replacements = lopt.transform(node)
  File "/usr/local/lib/python2.7/dist-packages/theano/sandbox/cuda/opt.py", line 952, in local_gpu_advanced_incsubtensor1
    gpu_y = gpu_from_host(y)
  File "/usr/local/lib/python2.7/dist-packages/theano/gof/op.py", line 507, in __call__
    node = self.make_node(*inputs, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/theano/sandbox/cuda/basic_ops.py", line 133, in make_node
    dtype=x.dtype)()])
  File "/usr/local/lib/python2.7/dist-packages/theano/sandbox/cuda/type.py", line 69, in __init__
    (self.__class__.__name__, dtype, name))
TypeError: CudaNdarrayType only supports dtype float32 for now. Tried using dtype int64 for variable None

Answer: I don't know whether this is faster, but it may be a little more concise. See if it is useful for your case.

import numpy as np
import theano
import theano.tensor as T

minibatchsize = 2
numfilters = 3
numsamples = 4
upsampfactor = 5

totalitems = minibatchsize * numfilters * numsamples

code = np.arange(totalitems).reshape((minibatchsize, numfilters, numsamples))

auxpos = np.arange(totalitems).reshape((minibatchsize, numfilters, numsamples)) % upsampfactor
auxpos += (np.arange(4) * 5).reshape((1,1,-1))

# first in numpy
shp = code.shape
upsampled_np = np.zeros((shp[0], shp[1], shp[2] * upsampfactor))
upsampled_np[np.arange(shp[0]).reshape(-1, 1, 1),
             np.arange(shp[1]).reshape(1, -1, 1),
             auxpos] = code

print "numpy output:"
print upsampled_np

# now the same idea in theano
encoded = T.tensor3()
positions = T.tensor3(dtype='int64')
shp = encoded.shape
upsampled = T.zeros((shp[0], shp[1], shp[2] * upsampfactor))
upsampled = T.set_subtensor(upsampled[T.arange(shp[0]).reshape((-1, 1, 1)),
                                      T.arange(shp[1]).reshape((1, -1, 1)),
                                      positions], encoded)

print "theano output:"
print upsampled.eval({encoded: code, positions: auxpos})

Question: Very similar to this question but for average pooling. The accepted answer there says that "same" pooling uses -inf as padding for max pooling. But what is used for average pooling? Do they just use 0?

Answer: Ok, I just tested it out myself.
np.set_printoptions(threshold=np.nan)
x = np.array([[[3.0,3.0,3.0],[3.0,3.0,3.0]]])
x = x.reshape(1,2,3,1)
sess = tf.Session()
K.set_session(sess)
b = K.constant(x)
b = AveragePooling2D(pool_size=(2, 2), padding="same")(b)
b = tf.Print(b,[b])
sess.run(b)

This returns the tensor [[[[3][3]]]]. Note what that implies: if the padded zeros were simply counted in the average, the edge window (two real 3.0 values plus two padded positions) would come out to 1.5, not 3. So the padding does not leak into the result; the mean is taken over the real, non-padded values only.
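A quick way to check the same arithmetic with plain numpy, using the 2x3 all-3.0 input from the snippet above (the window bounds are taken from that example):

```python
import numpy as np

# 2x3 input of all 3.0; "same" average pooling with a 2x2 window and
# stride 2 gives a 1x2 output. The second (edge) window covers one real
# column plus one padded column.
x = np.full((2, 3), 3.0)
edge_window = x[:, 2:3]                                 # the real values in the edge window

avg_counting_padding = (edge_window.sum() + 0.0) / 4.0  # padded zeros included in the count
avg_ignoring_padding = edge_window.mean()               # padded positions excluded

print(avg_counting_padding)  # 1.5
print(avg_ignoring_padding)  # 3.0
```

The printed 3 from the TensorFlow run matches the second interpretation: the average is taken over the non-padded values.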
https://thetopsites.net/projects/neural-network/max-pooling.shtml
Posted in Mobile, jQuery, JavaScript, ColdFusion | Posted on 07-13-2011 | 14,525 views So about a week or so ago I had an idea about a simple jQuery Mobile application that would make use of Local Storage. That was a week ago. Turns out - the "simple" application turned out to be a royal pain in the rear. Not because of Local Storage, but because of some misconceptions and lack of knowledge on my part in jQuery Mobile. What followed was a couple painful days (and more than a few curse words) but after all of that, I feel like I've got a better understanding of jQuery Mobile and got to play with some new features. So with that being said, let's get to the app. My idea was a rather simple one. Given a collection of art, allow the user to browse categories and view individual pieces of art. I've done this before as a jQuery Mobile example. But what I thought would be interesting is to add a simple "Favorites" system. As you browse through the art you can select a piece you like, add it to your favorites, and then later have a quicker way to access them. To make things even more interesting, I thought I'd make use of Local Storage. Local Storage is an HTML5 feature, and unfortunately, it isn't quite as sexy as Canvas so it doesn't get as many cool demos. But it's one of those - you know - useful things that is actually pretty well supported. Local Storage is basically a key system of data. You can store, on the browser, a key and a value. Like name="Raymond". Unlike cookies, this data is not sent to the server on every request. Rather, it just sits there on the client ready to be used by JavaScript. You've got access to both a permanent (localStorage) and session based (sessionStorage) API. The excellent DiveIntoHTML5 talks about Local Storage here. I won't talk any more about the API as it's rather quite simple and the Dive site explains it more than well enough. Before getting into this version though, let's quickly look at the initial, simpler version.
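As a quick illustration of the API mentioned above: only strings can be stored, so the favorites code later in this post round-trips an object through JSON under a single key. Here is that pattern in isolation, with a plain object standing in for window.localStorage (which only exists in a browser):

```javascript
// Stand-in for window.localStorage so this also runs outside a browser;
// in a real page, drop this line and use the built-in object.
var localStorage = {};

var favs = { "1": "Starry Night" }; // artid -> artname

// localStorage values must be strings, so serialize via JSON
localStorage["favorites"] = JSON.stringify(favs);

// ...and parse on the way back out
var loaded = JSON.parse(localStorage["favorites"]);
console.log(loaded["1"]); // Starry Night
```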
My application consists of three HTML files, all powered by ColdFusion. The home page will list categories, the category page will list art, and the detail page will show just an art piece. Let's start with the index page.

<!DOCTYPE html>
<html>
<head>
    <title>Art Browser</title>
    <link rel="stylesheet" href="" />
    <script src=""></script>
    <script src=""></script>
</head>
<body>

<div data-role="page">

    <div data-role="header">
        <h1>Art Browser</h1>
    </div>

    <div data-role="content">
        <ul data-role="listview">
        <cfoutput query="categories">
            <li><a href="category.cfm?id=#mediaid#&media=#urlEncodedFormat(mediatype)#">#mediatype#</a> <span class="ui-li-count">#total#</span></li>
        </cfoutput>
        </ul>
    </div>

</div>

</body>
</html>

Note that I begin by asking for media types. Our database categorizes art by media type, and I'll be treating those as my categories. The getMediaTypes method returns a query, which means I can simply loop over it in my content. Next up we have the category page - which is really just a slightly different version of the last one. Note though the use of the Home icon.

<cfparam name="url.id" default="">
<cfset art = application.artservice.getArt(mediatype=url.id)>

<!DOCTYPE html>
<html>
<head>
    <cfoutput><title>Art Category - #url.media#</title></cfoutput>
    <link rel="stylesheet" href="" />
    <script src=""></script>
    <script src=""></script>
</head>
<body>

<div data-role="page">

    <cfoutput>
    <div data-role="header">
        <a href="index.cfm" data-icon="home">Home</a>
        <h1>#url.media#</h1>
    </div>
    </cfoutput>

    <div data-role="content">
        <cfif art.recordCount>
        <ul data-role="listview">
        <cfoutput query="art">
            <li><a href="art.cfm?id=#artid#">#artname#</a></li>
        </cfoutput>
        </ul>
        <cfelse>
        Sorry, no art in this category.
        </cfif>
    </div>

</div>

</body>
</html>

And finally, let's look at our detail page.
<cfset art = application.artservice.getArtPiece(url.id)>

<!DOCTYPE html>
<html>
<head>
    <cfoutput><title>Art - #art.name#</title></cfoutput>
    <link rel="stylesheet" href="" />
    <script src=""></script>
    <script src=""></script>
</head>
<body>

<div data-role="page">

    <cfoutput>
    <div data-role="header">
        <a href="index.cfm" data-icon="home">Home</a>
        <h1>#art.name#</h1>
    </div>
    </cfoutput>

    <cfoutput>
    <div data-role="content">
        <b>Artist: </b> #art.artist#<br/>
        <b>Price: </b> #dollarFormat(art.price)#<br/>
        #art.description#
        <p/>
        <img src="#art.image#">
    </div>
    </cfoutput>

</div>

</body>
</html>

This page is even simpler. We just get the art detail and render it within the page. Nothing fancy at all - not yet anyway. You can demo this here: Ok, ready to go crazy? I decided on two main changes to my application. First, art pieces would have a new button, Add to Favorites (or Remove from Favorites). Once clicked, I'd use a jQuery Mobile dialog to prompt the user if they were sure. (Normally I hate crap like that. Don't second guess me. But I wanted to try dialogs in jQuery Mobile.) If the user confirms the action, I then simply update local storage to store the value. Since you can only store simple values, I used built in JSON features to store complex data about the art piece (really just the ID and name). On the home page, I had, what I thought, was a simple thing to do. When the page loads, simply fill out a dynamic list of the user favorites. Here's where things really took a turn for the worse for me. I want to give a huge shout out to user aaraonpadoshek who helped me out on the jQuery Mobile forums. I'll show the new home page and explain what changed.
<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Art Browser</title>
    <link rel="stylesheet" href="" />
    <script src=""></script>
    <script src=""></script>
    <script>
    //Credit:
    function supports_html5_storage() {
        try {
            return 'localStorage' in window && window['localStorage'] !== null;
        } catch (e) {
            return false;
        }
    }
    function supports_json() {
        try {
            return 'JSON' in window && window['JSON'] !== null;
        } catch (e) {
            return false;
        }
    }

    $(document).ready(function() {

        //only bother if we support storage
        if (supports_html5_storage() && supports_json()) {

            //when art detail pages load, show button
            $('div.artDetail').live('pageshow', function(event, ui){
                //which do we show?
                var id = $(this).data("artid");
                if (!hasInStorage(id)) {
                    $(".addToFavoritesDiv").show();
                    $(".removeFromFavoritesDiv").hide();
                }
                else {
                    $(".addToFavoritesDiv").hide();
                    $(".removeFromFavoritesDiv").show();
                }
            });

            //When clicking the link in details pages to add to fav
            $(".addToFavoritesDiv a").live('vclick', function(event) {
                var id=$(this).data("artid");
                $.mobile.changePage("addtofav.cfm", {role:"dialog",data:{"id":id}});
            });

            //When clicking the link in details pages to remove from fav
            $(".removeFromFavoritesDiv a").live('vclick', function(event) {
                var id=$(this).data("artid");
                $.mobile.changePage("removefromfav.cfm", {role:"dialog",data:{"id":id}});
            });

            //When confirming the add to fav
            $('.addToFavoritesButton').live('vclick', function(event, ui){
                var id=$(this).data("artid");
                var label=$(this).data("artname");
                addToStorage(id,label);
                $("#addToFavoritesDialog").dialog("close");
            });

            //When confirming the remove from fav
            $('.removeFromFavoritesButton').live('vclick', function(event, ui){
                var id=$(this).data("artid");
                var label=$(this).data("artname");
                removeFromStorage(id,label);
                $("#removeFromFavoritesDialog").dialog("close");
            });

            $('#homePage').live('pagebeforeshow', function(event, ui){
                //get our favs
                var favs = getStorage();
                var $favoritesList = $("#favoritesList");
                if (!$.isEmptyObject(favs)) {
                    if ($favoritesList.size() == 0) {
                        $favoritesList = $('<ul id="favoritesList" data-role="listview"></ul>');

                        var s = "<li data-role=\"list-divider\">Favorites</li>";
                        for (var key in favs) {
                            s += "<li><a href=\"art.cfm?id="+key+"\">"+favs[key]+"</a></li>";
                        }
                        $favoritesList.append(s);
                        $("#homePageContent").append($favoritesList);
                        $favoritesList.listview();
                    } else {
                        $favoritesList.empty();
                        var s = "<li data-role=\"list-divider\">Favorites</li>";
                        for (var key in favs) {
                            s += "<li><a href=\"art.cfm?id="+key+"\">"+favs[key]+"</a></li>";
                        }
                        $favoritesList.append(s);
                        $favoritesList.listview("refresh");
                    }
                } else {
                    // remove list if it exists and there are no favs
                    if($favoritesList.size() > 0) $favoritesList.remove();
                }
            });

            //Adding to storage
            function addToStorage(id,label){
                if (!hasInStorage(id)) {
                    var data = getStorage();
                    data[id] = label;
                    saveStorage(data);
                }
            }

            //loading from storage
            function getStorage(){
                var current = localStorage["favorites"];
                var data = {};
                if(typeof current != "undefined") data=window.JSON.parse(current);
                return data;
            }

            //Checking storage
            function hasInStorage(id){
                return (id in getStorage());
            }

            //Removing from storage
            function removeFromStorage(id,label){
                if (hasInStorage(id)) {
                    var data = getStorage();
                    delete data[id];
                    console.log('removed '+id);
                    saveStorage(data);
                }
            }

            //save storage
            function saveStorage(data){
                console.log("To store...");
                console.dir(data);
                localStorage["favorites"] = window.JSON.stringify(data);
            }

        }

    });
    </script>

</head>
<body>

<div data-role="page" id="homePage">

    <div data-role="header">
        <h1>Art Browser</h1>
    </div>

    <div data-role="content" id="homePageContent">
        <ul data-role="listview">
        <cfoutput query="categories">
            <li><a href="category.cfm?id=#mediaid#&media=#urlEncodedFormat(mediatype)#">#mediatype#</a> <span class="ui-li-count">#total#</span></li>
        </cfoutput>
        </ul>
    </div>

</div>

</body>
</html>

Ok - a bit more going on here. I'll take it step by step. On top I've got two utility functions based on code from the DiveIntoHTML5 site. One checks for local storage support and one for JSON. It's probably overkill for mobile, but it doesn't hurt. Notice that I check both of these functions before I do anything else. It occurs to me that I wrapped up a lot of code in that IF and I should have simply exited the document.ready event handler instead. I begin by using the "pageshow" event for my art detail page to decide if I should show the "Add to" or "Remove from" buttons. The hasInStorage function is defined later on and is just a utility I wrote for my code to quickly see if a particular art piece is favorited. I'll show that art page in a bit so you can see the HTML differences. The next two functions listen for clicks on the new buttons. Notice the "vclick" listener. This is not - as far as I know - actually documented. At least 5 of my gray hairs this week came from this. Apparently this is the new way to listen in for click events on multiple devices. It's in the jQuery Mobile blog, but again, it isn't documented. When I went live and tested my code, it had worked fine in Chrome but not at all in iOS or Android. Apparently this is why. Very frustrating! Notice - when you click, I use the built in changePage utility to load a page. But this is the cool thing - I can turn this into a dialog by adding a role attribute. So basically - addtofav.cfm and removefromfav.cfm are normal pages - but because of how I tell jQuery Mobile to load them, they turn into dialogs. Sweet. Moving down - the next two event handlers are for the actual confirmations. Nothing special there.
They call my utility functions defined later on to change local storage values. Ok - so here is the part I really struggled with and where aaraonpadoshek helped. I needed a way to say, "When the page loads, write out the list." Unfortunately, the pageshow event, which runs every time, also runs before the page initializes. Read that again - it runs every time the page shows and also before it's even fully drawn. There's a pageinit method which does run after the page initializes but only runs once. So when I used pageshow and tried to change my list, I got an error because jQuery Mobile hadn't added the magical unicorn dust yet to make it pretty. When I used pageinit it worked... once. Here's where Aaron's code helped. Notice we have pagebeforeshow being listened for now. It now detects if the list exists in the DOM. If it doesn't, we create it and initialize it ourselves as a list view. If it does exist, we update it using refresh. I'll be honest and say this still is a bit... fuzzy... in my mind. But it works! And that's good enough for me. I've got a bit of DRY going on there with the display but I'll fix that later. Moving down - you can now see my functions for working with local storage. To be honest, it's all pretty trivial. I've got a function to add and remove, to check for existence, and to get and persist. I added wrappers for them because I'm using JSON to store the data. Now let's look at the update to art.cfm:

<cfoutput>
<div class="addToFavoritesDiv" style="display:none"><a href="" data-artid="#art.artid#">Add to Favorites</a></div>
<div class="removeFromFavoritesDiv" style="display:none"><a href="" data-artid="#art.artid#">Remove from Favorites</a></div>
</cfoutput>

That's the two buttons. Notice they are both hidden by default. Also note the use of data-artid to store the primary key I'll use later. Now let's look at addtofav.cfm. I won't bother with the remove as it's pretty much the same.
<cfset art = application.artservice.getArtPiece(url.id)>

<!DOCTYPE html>
<html>
<head>
    <title>Add to Favorites?</title>
    <link rel="stylesheet" href="" />
    <script src=""></script>
    <script src=""></script>
</head>
<body>

<div data-role="page">

    <div data-role="header">
        <h1>Add to Favorites?</h1>
    </div>

    <div data-role="content">
        <p>
        <cfoutput>
        <a href="" class="addToFavoritesButton" data-artid="#url.id#" data-artname="#art.name#">Yes!</a>
        <a href="art.cfc?id=#url.id#">No thank you</a>
        </cfoutput>
        </p>
    </div>

</div>

</body>
</html>

Nothing fancy here either. Just simple content with some buttons. Here's a shot of the art view: And here's a shot of the dialog. And finally - the new home page: Whew! Done. By the way, I'll also point out another issue I had. When I first tested on a mobile device, the text was incredibly small. I got a nice tweet from @jquerymobile pointing out that in beta1, you need to include a new meta tag in your page templates:

<meta name="viewport" content="width=device-width, initial-scale=1">

Adding that helped right away. Ok - that's it. I've included a zip below and you can play with this yourself via the uber Demo button. Enjoy.
http://www.raymondcamden.com/index.cfm/2011/7/13/jQuery-Mobile--adding-Local-Storage
Once Dart Sass supports source maps, that will be the recommended way of locating the origin of generated selectors. Dart Sass is intended to eventually replace Ruby Sass as the canonical implementation of the Sass language. It has a number of advantages: It's fast. The Dart VM is highly optimized, and getting faster all the time (for the latest performance numbers, see perf.md). It's much faster than Ruby, and not too far away from C. It's portable. The Dart VM has no external dependencies and can compile applications into standalone snapshot files, so a fully-functional Dart Sass could be distributed as only three files (the VM, the snapshot, and a wrapper script). Dart can also be compiled to JavaScript, which would make it easy to distribute Sass through npm or other JS package managers. It's friendlier to contributors. Dart is substantially easier to learn than Ruby, and many Sass users in Google in particular are already familiar with it. More contributors translates to faster, more consistent development. 1.5.0 You can install packages from the command line with pub: $ pub get Alternatively, your editor might support pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use: import 'package:sass/s.
https://pub.dartlang.org/packages/sass/versions/1.5.0
15 April 2008 16:49 [Source: ICIS news] TORONTO (ICIS news)--Increasing biodiesel blending to 7% from 5% will not harm car engines or pose health risks, German biofuels producers group VDB said on Tuesday. VDB was responding to claims raised in a German parliamentary expert hearing last week that B7 would harm engines and pose a higher cancer risk. Multi-year studies proved B7 was safe, VDB said, adding that blending of up to 5% biodiesel content had not resulted in any harmful impact. In particular, B7 would not damage diesel particulate filters, it said. Earlier this month,
http://www.icis.com/Articles/2008/04/15/9116388/b7-wont-harm-cars-health-german-biofuels-group.html
Question: I'm fairly new to Python and recursive functions as a whole, so pardon my ignorance. I am trying to implement a binary search tree in Python and have the following insert method (taken out of a class):

def insert(self, key, root=None):
    '''Inserts a node in the tree'''
    if root == None:
        root = self.root
    if root.key == None:
        self._update(root, key)
        return 0
    else:
        tmp = root
        if key > tmp.key:
            # we work with the right subtree
            self.insert(key, root=tmp.right)
        elif key < tmp.key:
            # we work with the left subtree
            self.insert(key, root=tmp.left)
        else:
            # key already exists
            return 0

I'm not sure if this is legible, but it traverses the tree until it gets to a None value and updates the node with the key to insert. Now, the method works nicely and correctly creates a BST from scratch. But there's a problem with the return statements, as it only returns 0 if there is no recursion performed.

>>> bst.insert(10)
0
>>> bst.insert(15)
>>> bst.root.right.key
15
>>>

"Inserting" the root key again returns 0 (from line 15) the way it should.

>>> bst.insert(10)
0

I can't figure out why this happens. If I put a print statement in line 6, it executes correctly, yet it just won't return anything past the first insertion. Why is this? (I'm pretty sure I'm missing some basic information regarding Python and recursion) Thanks for your help, Ivan P.S.: I've read that recursion is not the best way to implement a BST, so I'll look into other solutions, but I'd like to know the answer to this before moving on.

Solution 1: On your recursive lines, you do not return anything. If you want it to return 0, you should replace them with lines like:

return self.insert(key, root=tmp.left)

instead of just

self.insert(key, root=tmp.left)

Solution 2: You are inside a function and want to return a value, what do you do? You write

def function():
    return value

In your case you want to return the value returned by a function call, so you have to do
def function():
    return another_function()

However you do

def function():
    another_function()

Why do you think that should work? Of course you use recursion, but in such a case you should remember the Zen of Python, which simply says: Special cases aren't special enough to break the rules.
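Putting Solution 1 into practice, here is a minimal self-contained version (the Node/BST scaffolding is an assumption, since the asker's full class isn't shown; the key change is that each recursive call's result is returned back up the chain):

```python
class Node:
    def __init__(self, key=None):
        self.key = key
        self.left = None
        self.right = None

class BST:
    def __init__(self):
        self.root = Node()

    def insert(self, key, root=None):
        """Insert a key; return 0 whether it was stored or already present."""
        if root is None:
            root = self.root
        if root.key is None:
            # empty slot: store the key and grow empty children
            root.key = key
            root.left = Node()
            root.right = Node()
            return 0
        if key > root.key:
            # return the recursive call's value so it propagates to the caller
            return self.insert(key, root=root.right)
        elif key < root.key:
            return self.insert(key, root=root.left)
        else:
            return 0  # key already exists

bst = BST()
print(bst.insert(10))        # 0
print(bst.insert(15))        # 0 -- now returned even after recursion
print(bst.root.right.key)    # 15
```

Without the `return` in front of the recursive calls, the inner call still computes 0, but the outer frame discards it and implicitly returns None, which is exactly the behavior the asker observed.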
http://www.toontricks.com/2018/06/tutorial-python-recursion-and-return.html
You know you have to write unit tests. You have the tools, and your team is committed to the idea. Somehow, though, despite your best intentions, the tests just never get written. Or you write them, but never run them. They always fail anyway. For one reason or another, your project is resisting the will of the unit test. Usually, this means that your code needs a good solid refactoring, to make it more accepting of the way of the test. Many people have observed that unit testing fits the mold of Heisenberg's Uncertainty Principle, which says, among other things, that at the quantum level, the act of observing an event changes the nature of that event. Unit tests often have the same effect, but it extends beyond the runtime event being tested. It affects the design of the code that is to be tested, as well. In this column, we'll examine 10 different ways you can change your .NET code to make it ready to be tested. By doing so, you will be creating a more loosely coupled, flexible and transparent architecture, which will benefit you not only in testing but in documentation, maintenance and, eventually, modification. With a little forethought and some careful implementation, unit tests can drive more than the QA process; they can help you design a more robust application in the first place. Use Interfaces Those of us who came through the COM days with our registries intact remember that most strident of admonitions: "it's the interfaces, stupid." To write COM code, you had to employ interfaces as the primary coupling mechanism. For the un- (or under-)initiated, interfaces are lightweight constructs which define the publicly accessible method signatures exposed by a class; they contain no implementation details whatsoever. Instantiated objects can be referred to by references of the interface type, but the interface can never be instantiated itself.
Here is a simple interface example: we'll define an interface for agents, IAgent. Agents all have a name, and can attempt to accomplish a mission in a given amount of time. We'll implement two different kinds of agents, SecretAgents and AirlineAgents. Airline agents perform their mission by getting people on planes, and secret agents have clandestine rendezvous points. In addition, the SecretAgent, if in undercover mode, won't reveal her name. In projects based entirely on concrete types, you may often hear complaints about the amount of time it takes to write the unit tests. This is due to the fact that test cases are meant to exercise the methods of a type; as long as each class is defined only by its own instantiable self, then unit tests are not reusable. There is a one-to-one correlation between the classes and the test cases. Using interfaces, though, can allow you to reuse your unit tests. When you employ this architectural strategy, you are defining a common subset of available functionality that your concrete classes can implement. Unit tests, since they target methods on a type, can be written to exercise an implementation of a given interface. For every concrete type that implements that interface, the same test case can be used to exercise and verify the methods. To test these different agents, we will want to run tests against the public interface IAgent. Tests for accomplishMission will all be remarkably identical; test known values for timeLimit and check true or false on the return. We need to create a specific unit test for each concrete type of agent in our application, but we don't want to rewrite the test for accomplishMission each time. Instead, start by defining a base class for your unit test that can adequately test accomplishMission: Notice that the class is not marked with [TestFixture] though it does contain at least one [Test].
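The code listings for this example are missing from this copy of the article. A minimal C# sketch of what they might have looked like, reconstructed from the surrounding prose (the member names, constructors, and mission logic are assumptions, not the article's original code), is:

```csharp
using NUnit.Framework;

public interface IAgent
{
    string name { get; }
    bool accomplishMission(int timeLimit);
}

public class AirlineAgent : IAgent
{
    private string _name;
    public AirlineAgent(string name) { _name = name; }
    public string name { get { return _name; } }

    public bool accomplishMission(int timeLimit)
    {
        // gets people on planes; succeeds when given enough time
        return timeLimit >= 10;
    }
}

public class SecretAgent : IAgent
{
    private string _name;
    public bool undercover = false;
    public SecretAgent(string name) { _name = name; }

    public string name
    {
        // an undercover agent won't reveal her name
        get { return undercover ? "unknown" : _name; }
    }

    public bool accomplishMission(int timeLimit)
    {
        // makes the clandestine rendezvous when given enough time
        return timeLimit >= 5;
    }
}

// Shared test logic lives in a base class; deliberately NOT [TestFixture].
public class TestAgents
{
    protected IAgent agent;
    protected int goodTimeLimit;
    protected int badTimeLimit;

    [Test]
    public void testAccomplishMission()
    {
        Assert.IsTrue(agent.accomplishMission(goodTimeLimit));
        Assert.IsFalse(agent.accomplishMission(badTimeLimit));
    }
}
```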
TestAgents will never be instantiated directly by the NUnit test runner because you have not marked it as a TestFixture, which is precisely the behavior you want. The test method itself relies on protected fields that will be filled in by the derived tests. The test for AirlineAgent is only testing the accomplishMission method, since AirlineAgent currently has no other functionality. SecretAgent, on the other hand, has the extra ability to not return its name if it is undercover. The TestSecretAgent class therefore has an extra [Test] method to cover this. The code that actually exercises accomplishMission, though, is written only once, and can be maintained and upgraded in a single place. Define a base test class To create a test case for NUnit, you need only define a class and mark it with the TestFixture attribute from the NUnit Framework assembly. This designation provides a marker for the test runner to identify your test cases and run them. It provides no other functionality for your class, and since it is only metadata, your class does not inherit any meaningful functionality or data from it. However, you will find very quickly that there are techniques and strategies you employ again and again for your unit tests. It might be that you have to perform a task to set up a test data store, and run it often to clean out cruft from earlier tests, or that you have to create or destroy session information to enable proper test values. These types of repetitive activities cry out to be centrally located so that you can eliminate the extra work of typing them again and again. In keeping with the principle of "don't repeat yourself", it is, in fact, imperative. The solution is to create a base test class that your specific tests can derive from. This base class will host a series of utility methods and data structures that you can call on from any individual test.
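Concretely, the derived fixtures described above might look something like this (a sketch; the constructors and time limits are assumptions, not the article's original listing):

```csharp
using NUnit.Framework;

[TestFixture]
public class TestAirlineAgent : TestAgents
{
    [SetUp]
    public void init()
    {
        agent = new AirlineAgent("Alice");
        goodTimeLimit = 60;
        badTimeLimit = 0;
    }
}

[TestFixture]
public class TestSecretAgent : TestAgents
{
    [SetUp]
    public void init()
    {
        agent = new SecretAgent("Natasha");
        goodTimeLimit = 60;
        badTimeLimit = 0;
    }

    // the one extra test: an undercover agent hides her name
    [Test]
    public void testUndercoverName()
    {
        SecretAgent secret = (SecretAgent)agent;
        secret.undercover = true;
        Assert.AreNotEqual("Natasha", secret.name);
    }
}
```

Both fixtures inherit testAccomplishMission from the base class, so the runner exercises it once per concrete agent type without the test being written twice.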
In addition, if all (or even most) of your tests require some common code in setUp and tearDown, you can implement those methods on the new base class. When your individual tests require the functionality, they can call up the chain and invoke it. For example, with our agents application, perhaps we want our unit tests to be able to validate that the agent's name matches a certain pattern (no numbers, or if numbers, starts with "00", or something). We would make a base class for our tests that exposes an isValidNameString method: Simply have your other tests derive from this one. In the case of our example above, we'll have TestAgents derive from AgentTestBase, thus providing our common testing functionality to all our current tests. Another favorite technique of mine for testing .NET code is to make my mock objects be internal classes in the base test class. This is especially useful when I only have a few mock objects, and most of my tests can share the same versions. See "Mock the data access layer" below for an example of this. Do not decorate the base class as a TestFixture. This just clutters your test runs with ignored tests (since it won't have any actual [Test] methods on it). As much as feasible, make everything return a value This obviously isn't a hard-and-fast, all-or-nothing rule, but as much as possible, avoid void methods and sub procedures. Unit tests are easier to write when the primary value to test is the return from a method. When a unit test has to scurry off to the database or another object or a text file to verify a simple method, then not only have you created more work for your unit test but you are also testing more than just your business logic; you are testing the infrastructure that leads from your business logic to that external data store. You are additionally adding the exact same infrastructure into your unit test, and if there is a problem, it will be a problem in the test as well as the thing being tested. 
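Circling back to the isValidNameString helper described earlier, the base test class might be sketched like this (the exact validation pattern is an assumption based on the prose):

```csharp
using System.Text.RegularExpressions;

// Shared plumbing for all tests; again, no [TestFixture] marker here.
public class AgentTestBase
{
    // valid if the name contains no digits, or, if it has digits,
    // it starts with "00"
    protected bool isValidNameString(string name)
    {
        if (!Regex.IsMatch(name, @"\d")) return true;
        return name.StartsWith("00");
    }
}

// per the article, the shared agent fixture now derives from it
public class TestAgents : AgentTestBase
{
    // ... the shared agent tests as before ...
}
```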
A method that returns the logical outcome of its processing can clearly enunciate its view of the world to its client, whether that be a user, other process or unit test. It may or may not reflect the true state of things, since obviously the full state of the application will probably be based on more than the code in this single method, but it will reflect what the method thinks is the state of things, which is what you should be unit testing. Testing the full application state is a job for integration testing.

Say that AirlineAgents have to be able to submit a timecard at the end of the day. The business rules for the application state that the agent should create the timecard, then save it to the database. Such a method might normally be written as a void routine that builds the card and saves it inline. To test that, you would have to go to the database and look up the timecard and verify its format. This is open to a number of problems, and isn't really testing the code in your method; it is testing a complex collection of activity, most of which won't have anything to do with the agent's ability to create the timecard. A better way is to write the method so that it builds and returns the timecard. Now, you can write a unit test that calls the method then verifies that the string returned matches the known state of the agent and the expected format of the document. You are not testing the database code, since that is orthogonal to the problem at hand.

So what happens if your method has more than one logical return value? If your architecture already calls for this, you will already have devised an answer to the problem. But what if you are trying to convert a void method to one with a return value (or values) just for the benefit of your unit tests? In this case, you really have three options. Avoid option #2 as much as possible unless your architecture specifically calls for it. Using REF parameters adds complexity to your application, and your unit tests should not force you to add complexity to your code.
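A rough sketch of the return-a-value version of the timecard example, in Python rather than the article's .NET, with an invented card format:

```python
class AirlineAgent:
    def __init__(self, name, hours_worked):
        self.name = name
        self.hours_worked = hours_worked

    def submit_time_card(self):
        # Build and *return* the formatted card instead of saving it
        # to the database here; the "timecard|name|hours" format is
        # invented for illustration.
        return "timecard|%s|%.1f" % (self.name, self.hours_worked)

# A unit test can now assert on the return value alone, without
# touching any storage infrastructure:
agent = AirlineAgent("Nikita", 7.5)
assert agent.submit_time_card() == "timecard|Nikita|7.5"
```

The test verifies exactly what the method thinks the card should be; whether the card later reaches the database is a separate, integration-level concern.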
All of the other suggestions given in this article actually reduce complexity, which is the key to testable code.

Separate data access from business logic

Following from the last suggestion, make sure that your data access code is separated into its own layer. You might be using direct ODBC calls, or typed datasets, or an O/R mapping tool like NHibernate, but whichever method you use, make sure it lives in its own namespace and object model. If your business objects are focused on the actual problem domain (rather than the common, and already solved, problem of storing data) then it is much easier to test whether or not you are solving the real problem. In addition, you can now implement a series of unit tests that exercise the data storage infrastructure for your application without jumping through hoops to ensure that the data you are storing is correctly formatted. Instead of writing the timecard directly to the database inside of AirlineAgent, as in our example above, you would write a new data access class that handles saving and loading timecards. The method submitTimeCard now becomes much cleaner. In addition, now you can write unit tests that exercise just the data access layer.

Mock the data layer to test the business logic

When you are writing the unit tests that target the domain logic, mock the data access layer to ensure the separation of concerns. Just because you have separated the data access code into its own namespace, if the business logic is inseparably dependent on it, you still have the same problem: testing the business logic is dependent on successful data access logic. You can either choose to create your own mock data access classes, or employ a mock object framework like nmock. We'll save mock-object frameworks for another column. Today, let's just create a mock version of TimeCardDA that throws exceptions when it is given a well-known input. This mock version of the data access defines some well-known error-inducing inputs.
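A hand-rolled mock along these lines might look like the following sketch; the article's code is .NET, and the class name, method name and sentinel value here are invented for illustration:

```python
class TimeCardStorageError(Exception):
    """Raised by the mock to simulate a storage failure."""

class MockTimeCardDA:
    # Well-known, error-inducing input (sentinel value invented here).
    BAD_AGENT = "AGENT_THAT_BREAKS_SAVE"

    def save_time_card(self, agent_name, card):
        if agent_name == self.BAD_AGENT:
            raise TimeCardStorageError("simulated storage failure")
        # Any other input "succeeds" and returns something resembling
        # a real value, here a fake record id.
        return 1

da = MockTimeCardDA()
assert da.save_time_card("Nikita", "timecard|Nikita|7.5") == 1

raised = False
try:
    da.save_time_card(MockTimeCardDA.BAD_AGENT, "any card")
except TimeCardStorageError:
    raised = True
assert raised  # the business logic's error path can now be exercised
```

Because the failure is triggered deterministically by a sentinel input, a test can exercise the business logic's error handling without an unreliable real database in the loop.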
If the input is anything other than the defined error-inducing values, then the methods will return something resembling a real value. When the well-known input is given, the method will error out.

Make use of configuration

If you have followed the advice given so far, then you have an architecture based on interface implementation, with a variety of concrete implementations of your .NET code. Multiple domain objects implementing the same interface, separated domain and persistence layers, and a variety of mock or test objects that can be used in place of the real thing. In order to make full use of all of this, you need to take advantage of the configuration abilities of .NET.

Let's examine the case of the persistence layer. For normal use, your domain model will rely on the real data access layer to perform its data storage. During testing, though, you will want to use the mock persistence layer. If your domain model is tightly coupled to the real persistence layer, then you will not be able to easily replace it at test-time. Instead of making the domain model directly coupled to the persistence model, employ the indirection pattern. Create some kind of broker or router that your domain model relies on to provide concrete implementations of the persistence layer. This broker will look up the classes that are needed for the storage operations and create instances of those classes reflectively, based on the values in the configuration file. When the application is deployed, it will have a configuration file pointing to all of your real persistence classes. The test environment, though, should have its own copy of the configuration that points to your mock data object layer. For our agents application, we'll want to have a way to get the real version of the TimeCardDA in a deployment scenario, but the mock version for unit testing.
First, let's add a configuration element to the .config file for our application. Next, we'll modify TimeCardDA so that you cannot instantiate it directly. Instead, we'll use the factory pattern (calling a static method on the class to return a new instance). The static method will look in the configuration file to determine whether to return the real or mock object. Finally, we need only change our code to use the new factory method instead of the direct constructor. Now, whenever we run the unit test suite, we make sure that the value of the TimeCardDA key in the config file is set to the mock object instead of the real, and our unit tests will test only the logic of the business model.

You could, instead, implement a method or property on the domain objects that tells them whether or not to use the real or mock object. This makes it very explicit from the test code's perspective what is being tested; nobody reading your tests will miss the fact that the data generated during the test will be sent off to some mock objects instead of the real thing. However, it means that your domain objects have to implement some code that is ONLY useful during testing, and this is usually a bad idea. Your domain objects should be devoted to the single purpose of solving your business problem; extraneous testing-specific code at best clutters the interface, and at worst, can have unintended ripple-effects throughout the class and possibly the rest of the code.

Make Your Classes Do Only One Thing

This is a fairly common design principle that deserves its own point. When you write a class, it is the domain model representation of some idea. Too often, we clutter these classes with code that is only tangentially related to the idea. In "Separate data access from business logic", we examined one of the many ways that programmers clutter their code with extraneous functionality. The rule of thumb is that there should only ever be one reason for you to modify a class.
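The configuration-driven factory for TimeCardDA described earlier can be sketched in Python, with a plain dict standing in for the .config file and illustrative class names:

```python
# A plain dict stands in for the application's .config file.
CONFIG = {"TimeCardDA": "RealTimeCardDA"}

class RealTimeCardDA:
    def save_time_card(self, agent_name, card):
        raise NotImplementedError("would talk to the real database")

class MockTimeCardDA:
    def save_time_card(self, agent_name, card):
        return 1  # canned success

class TimeCardDAFactory:
    _registry = {
        "RealTimeCardDA": RealTimeCardDA,
        "MockTimeCardDA": MockTimeCardDA,
    }

    @staticmethod
    def create():
        # Like the static factory method described earlier: look up
        # which concrete class to instantiate from configuration.
        return TimeCardDAFactory._registry[CONFIG["TimeCardDA"]]()

# The test environment points the key at the mock instead:
CONFIG["TimeCardDA"] = "MockTimeCardDA"
assert isinstance(TimeCardDAFactory.create(), MockTimeCardDA)
```

Callers never name a concrete class, so swapping the whole persistence implementation is a one-line configuration change rather than a code change.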
If you find yourself going back to the class to make changes again and again as different requirements change, then the class is probably overcrowded. When this happens, determine what the class was intended to do, and refactor everything else into one or more other classes. If our SecretAgent class was spending too much time in file I/O, or the AirlineAgent kept looking things up via LDAP, then we would want to remove that non-core logic and put it in another class dedicated to that kind of task.

Have Domain Object Factories

Often, the unit of functionality that you are testing is dependent on other parts of your domain model. In general, you will not want to use mock objects to impersonate the objects your functionality relies on. Mock objects are generally used to impersonate external objects from third-party libraries whose internal state you cannot control nor directly observe. Your domain objects might have very convoluted construction requirements. If the object needs more than a single call to a simple constructor to be initialized into a state that is useful for your test, you should think about creating a test factory for that class. The test factory should provide a static method for returning an instance of the class in a ready state for use in your tests. On top of that, it should provide some static constants defining known property values that can be used to verify the state during your tests. For example, we might need to create a series of SecretAgents for testing. We will want to create them in known states with known values. Whenever one of your tests needs to introduce a SecretAgent into the test, perhaps passing one or more into a method that operates on a collection of IAgents, then you can use the factory to create them. Moreover, when you are handed back a reference to an IAgent in the course of a test, you can compare its data values against known values on the test factory.
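A sketch of such a test factory, in Python with invented fields, since the article's own SecretAgent listing is .NET:

```python
class SecretAgent:
    # Fields invented for illustration.
    def __init__(self, name, code_number, undercover=False):
        self.name = name
        self.code_number = code_number
        self.undercover = undercover

class SecretAgentTestFactory:
    """Builds SecretAgents in known states for use across tests."""

    # Known property values that tests can compare against.
    KNOWN_NAME = "Test Agent"
    KNOWN_CODE = "007"

    @staticmethod
    def create(undercover=False):
        return SecretAgent(SecretAgentTestFactory.KNOWN_NAME,
                           SecretAgentTestFactory.KNOWN_CODE,
                           undercover)

agent = SecretAgentTestFactory.create(undercover=True)
assert agent.name == SecretAgentTestFactory.KNOWN_NAME
assert agent.undercover
```

Any test that later receives an agent back can compare its fields against the factory's KNOWN_* constants instead of hard-coding values in every test.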
Think carefully about packaging, assemblies and namespaces

Eventually, almost all "enterprise level" applications will grow to the point that running the entire suite of unit tests becomes a severe burden on the development team. Since running unit tests often is one of the central tenets of agile development (and it really is a good idea) this problem can quickly lead to fewer runs of the tests (and at worst, fewer tests written). One way to avoid this problem is to plan your application for natural divisions of the codebase. Instead of allowing everything to be part of one monolithic assembly, the application should be built around the idea of multiple, interrelated assemblies. Your tests should follow the same architectural separation. When you work on a specific piece of the application, you can focus on running the tests associated with the assembly you are knee-deep in, and ignore the rest.

Don't be afraid of multiply-nested namespaces. If your application naturally breaks down in a nested tree structure, then let it. The only strong rule for breaking your application up into multiple assemblies and namespaces is to be careful not to overly entwine them. You generally only want dependencies running one direction between any two packages. If classes in one assembly are dependent on classes in a second, the reverse should not be true. This will allow you to replace entire assemblies more easily (for instance, if all your persistence code lives in its own assembly, you could swap it out for a new assembly that targets a different database, or one composed entirely of mock objects).

Pick a logging strategy early

Even given the earlier recommendation to have your methods return values, sometimes you can't test everything you need to just by examining the return from a method. Since unit tests are usually run from outside your development environment, step-through debugging during testing is difficult.
In truth, most unit tests are run as batches anyways, and even if you could step into the code, you won't be sitting there to do it. What you need is an external store of application state that you can examine and correlate back to the test results. You need a logging strategy. It is vital to implement your chosen logging strategy as early as possible in the development effort, so that you don't have to layer it back into existing code later. Beyond that, logging will be enormously beneficial to your application post-deployment. Since bugs are difficult enough to trace when you have a full development environment available to you, tracing bugs at the customer site is next to impossible. Detailed logging allows you, and your customer, to get a detailed look at application state in both a real-time and historical perspective.

You may choose to use the built-in Trace mechanism in the FCL, or move to an external tool like log4net. Regardless, make sure you learn the basics of categorizing your messages, sorting by priority and, most importantly, routing them to different output stores. The console is a great place to get realtime logging information, but what happens when your unit tests run as a batch overnight? Where is the console tomorrow? You need to be able to store the log information in files or data tables for retrieval and examination later.

Summary

This is by no means an exhaustive treatment on the testability of applications. Entire articles can be written just about the testability of user interfaces and data access layers. Instead, these ten suggestions form a good starting point for re-examining the assumptions inherent in your design, and thinking through the decisions that will affect your code's testability. Since testability determines how well you can verify your application, it should follow that a testable application is a better application.
Biography

Justin Gehtland is a founding member of Relevance, LLC, a consultant group dedicated to elevating the practice of software development. He is the co-author of Windows Forms Programming in Visual Basic .NET (Addison Wesley, 2003) and Effective Visual Basic (Addison Wesley, 2001). Justin is an industry speaker and instructor with DevelopMentor in the .NET curriculum.
http://searchwindevelopment.techtarget.com/tip/0,289483,sid8_gci1277483,00.html
Ask for Help - "Can you use Google Maps on an https page?" - Probably via an iframe. Is there a preferred way?

RSpec global before/after: In addition to Behaviour-scoped before and after method forms, Rspec also has global prepend_before, prepend_after, append_before, append_after methods.

I was just converting some Test::Unit tests to Rspec, and these regexps were handy. In one file, they handled 51 out of 53 lines, saving my fingers a lot of work. Tests can take an infinite variety of formats, so these obviously won't apply to everything, but they do illustrate how to use regexp substitution. This is using TextMate; your regexp implementation may vary…

from -> to (search string / replace string):

  def test_foo -> it "test_foo" do
    search:  def (test_[a-z_]*)
    replace: it "$1" do

  assert !foo -> foo.should_not be_true
    search:  assert !(.*)$
    replace: $1.should_not be_true

  assert foo -> foo.should be_true
    search:  assert (.*)$
    replace: $1.should be_true

  assert_equal foo, bar -> bar.should == foo
    search:  assert_equal (.*), (.*)$
    replace: $2.should == $1

  $ rails railsdi
  $ cd railsdi/
  $ ls
  $ script/generate controller Sample
  $ # create development/test databases

First, add a constant in boot.rb. Just ignore the warning to not modify boot.rb – it's not talking about you. Put this at the beginning, right after the section that defines RAILS_ENV:

  # boot.rb
  REGISTRY = {}

Set any values or objects you want in the registry:

  # development.rb
  REGISTRY[:key] = "development_value"

  # test.rb
  REGISTRY[:key] = "test_value"

  # production.rb
  REGISTRY[:key] = "production_value"

  # sample_controller_test.rb
  def test__can_redefine_registry_value
    REGISTRY[:key] = 'overridden_value'
    get :index
    assert_equal 'overridden_value', assigns['registry_value']
  end

I am a remote employee at Pivotal, so I do a lot of remote pairing, and I'm always trying new options. Here's a quick writeup on what I've found to work best. This is specifically for working over a WAN.
If you are on a LAN, other options will be better (and you should just get on the same machine as your pair anyway!). Remote pairing is pretty usable unless bandwidth is causing problems. CPU also makes a big difference – performance on an iMac is a lot better than on a Mac mini, especially the 1.6GHz mini. I'm using the term "server" to mean the machine running the VNC server, and "client" to mean the machine running the VNC client.

Mac Server: OSXVNC (Vine Server) with default settings. Turn on shared VNC connections if you want.

Windows server: UltraVNC with the Video Driver Hook seems to work best; it's almost as fast as Windows Remote Desktop, but it requires that you use the UltraVNC client, which is only available for Windows. However, sometimes you get screen redraw issues with the video driver hook. This seems to be due to network or CPU issues, because it works great most of the time on most machines. If this happens, you can fall back to the Tight protocol on the client. The "WinVNC Current User Properties" I use for the UltraVNC server are:

Windows Client for server with a single monitor: If you are using the UltraVNC windows server with the video driver hook, then you should use the UltraVNC client, with the "Ultra" encoding and 256 colors, with CopyRect and Cache encoding enabled. If you are not using the UltraVNC video hook on your server, then UltraVNC is still a good client, with these settings:

Windows Client for a server with a dual monitor and a client with a dual monitor: UltraVNC client has a bug where it scales, but will still not show any more width than one of the client's monitors, even though you make the window bigger. RealVNC does not have this problem, so it's probably a better client in this situation, even though it doesn't allow configuration of all the above options like UltraVNC does (at least not from the GUI).

Mac Client:

Linux server and client: I've not been too impressed with the VNC linux servers or clients.
They seem to be slow and crash a lot (both RealVNC and TightVNC). Your mileage may vary.

Alternatives I've tried and found to be inferior:

Other Notes: I've read that running VNC over a compressed SSH tunnel will help performance. However, I think with the latest VNC protocols, which already do compression, this doesn't make much of a difference.

Summary: Most of these observations are from running different clients side-by-side. They are very subjective, because bandwidth and CPU are always affecting the performance. Let me know what your experiences are, and if you have any different ideas.

Say you have two tags you want to diff, and one has a deleted directory. If you do an 'svn diff', you won't see the deleted directory UNLESS you give the '--summarize' option:

  svn diff --summarize
http://pivotallabs.com/author/chad/page/4/?page=2
celGameServerManager Class Reference

This is an interface you should implement in order to manage the events from the server.

#include <physicallayer/network.h>

Detailed Description

This is an interface you should implement in order to manage the events from the server. You can use it to:
- authorize players to join the game, check the validity of the data of the players.
- listen for changes in the state of the network connections to the clients.
- catch the client events.
- handle the detection of cheats.
- close the game.

Definition at line 602 of file network.h.

Member Function Documentation

A player is asking to join the game. Return true if the new player is accepted, false otherwise (for example, because the player has been banned). A text explaining the reason of a negative answer can be specified.

A client event has been caught.

An entity is controlled by a player and a problem was encountered while updating the data of this entity.

The state of the network connection to the player has changed.

The game is finished and the server will be closed. You should delete here all entities and the physical layer.

Check if the new data of a player are valid. It happens when iCelGameClient::UpdatePlayer has been called on the client side. Returns the validated data of the player.

The documentation for this class was generated from the following file: network.h

Generated for CEL: Crystal Entity Layer by doxygen 1.4.7
http://crystalspace3d.org/cel/docs/online/api-1.0/classcelGameServerManager.html
Jonathan Neal just announced that he has been working on a polyfill for CSS Container Queries. Let's take a look at how it works …

~

🤔 Container Queries?

Container Queries allow authors to style elements according to the size or appearance of a container. For size based container queries this is similar to a @media query, except that it will evaluate against the size of a parent container instead of the viewport. For style based container queries you conditionally apply styles based on the calculated value of another CSS property.

🤔 Polyfill?

A polyfill is a piece of code (or plugin) that provides the technology that you, the developer, expect the browser to provide natively — What is a Polyfill?

~

Update 2021.11.26: This polyfill does not play nice with the most recent Container Queries syntax. Instead, please use the container-query-polyfill polyfill.

To let CQFill do its work, duplicate the contain declaration into a --css-contain custom property:

  contain: layout inline-size style; /* For browsers that support Container Queries */
  --css-contain: layout inline-size style; /* For the polyfill */

Next, load the polyfill itself, after which it'll do its thing:

  <script src=""></script>

If you want a local copy of CQFill, you can install it per NPM/Yarn:

  npm install cqfill

  import { cqfill } from "cqfill";

⚠️ I've noticed that loading cqfill from Skypack doesn't seem to work unfortunately …
https://www.bram.us/2021/04/27/a-first-look-at-cqfill-a-polyfill-for-css-container-queries/
Very recently my friend Hari wrote an article about operator overloading in Ruby, and he asked me to implement the same program which he used in his article in Python. His aim was to compare the two hottest languages of technology lovers. Though our example is very simple and straightforward, we consider it a stepping stone for our comparative study. Below we are giving our example, which demonstrates operator overloading for '+' (addition). I expect you know about operator overloading. If you don't, in simple words, it is giving special meaning to a language's operators ('+', '-', '/', '*' etc.) via new definitions through class methods. Read more about operator overloading from here.

In Python, operator overloading is achieved by overriding Python's special methods from a class. If you want to do operator overloading in Python you should know about its special methods (which are also called magic methods). Python has a number of special methods, which should be of the form __xxx__. The list is very big, but the most commonly used ones are __add__, __sub__, __repr__, __str__, __len__, __gt__, __lt__ etc. You will get the complete list from here.

Let's see how we can give special meaning to __add__. By default we can't add two arbitrary objects; we can add only two numbers. In Python you can use '+' to concatenate two strings too. But look how we can add special meaning to '+' to add two objects.

  class Add(object):
      def __init__(self, val):
          self.val = val

      def __add__(self, obj):
          return self.val + obj.val

  a = Add(10)
  b = Add(20)
  print a + b

As you expect this will print 30. In order to understand how this works, you can read the print statement like this:

  print a.__add__(b)

  a = Add('Hello')
  b = Add(' World')
  print a + b  # This will print 'Hello World'.

The above examples are straightforward uses of the '+' operator. We can further extend our __add__ method to intelligently add different data types. Normally Python won't allow adding different data types (try some examples in your Python shell). Here is our extended __add__ method, with some examples:

  class Add(object):
      def __init__(self, val):
          self.val = val

      def __add__(self, obj):
          if isinstance(self.val, str) or isinstance(obj.val, str):
              return str(self.val) + str(obj.val)
          return self.val + obj.val

  a = Add(10)
  b = Add(20)
  print a + b   # 30

  c = Add('Hello')
  d = Add(' World')
  print c + d   # Hello World

  e = Add(5)
  f = Add(' is my number.')
  print e + f   # 5 is my number.

In the above definition of __add__ we check whether either value is a string; if so, we concatenate the two values, otherwise we add them.
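One further extension (my addition, not from the original post): as written, Add only combines with other Add instances. Add(10) + 5 fails because the integer 5 has no .val attribute, and 5 + Add(10) fails because int's __add__ doesn't know about Add. Accepting plain values and defining the reflected __radd__ hook fixes both:

```python
class Add(object):
    def __init__(self, val):
        self.val = val

    @staticmethod
    def _unwrap(other):
        # Accept both Add instances and plain values.
        return other.val if isinstance(other, Add) else other

    def __add__(self, other):
        other = Add._unwrap(other)
        if isinstance(self.val, str) or isinstance(other, str):
            return str(self.val) + str(other)
        return self.val + other

    def __radd__(self, other):
        # Called for `5 + Add(10)`: int's __add__ returns
        # NotImplemented, so Python tries the reflected method,
        # with the operands in the opposite order.
        other = Add._unwrap(other)
        if isinstance(self.val, str) or isinstance(other, str):
            return str(other) + str(self.val)
        return other + self.val
```

With this version, Add(10) + 5 and 5 + Add(10) both give 15, and Add(5) + ' apples' gives '5 apples'.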
https://jineshpaloor.wordpress.com/2012/12/11/operator-overloading-in-python/
Reaction::UI::Controller::Root - Base component for the Root Controller

  package MyApp::Controller::Root;
  use base 'Reaction::UI::Controller::Root';

  __PACKAGE__->config(
    view_name => 'Site',
    window_title => 'Reaction Test App',
    namespace => ''
  );

  # Create UI elements:
  $c->self->push_viewport('Reaction::UI::ViewPort', %args);

  # Access the window title in a template:
  [% window.title %]

Using this module as a base component for your Catalyst Root Controller provides automatic creation of a Reaction::UI::Window object containing an empty Reaction::UI::FocusStack for your UI elements. The stack is also resolved and rendered for you in the end action.

At the beginning of each request, the Window object is created using the configured "view_name", "content_type" and "window_title". These thus should be directly changed on the stashed window object at runtime, if needed.

Set or retrieve the classname of the view used to render the UI. Can also be set by a call to config. Defaults to 'XHTML'.

Set or retrieve the content type of the page created. Can also be set by a call to config or in a config file. Defaults to 'text/html'.

Set or retrieve the title of the page created. Can also be set by a call to config or in a config file. No default.

Stuffs a new Reaction::UI::Window object into the stash, using the "view_name" and "content_type" provided in the configuration. Make sure you call this base begin action if writing your own.

Draws the UI via the "flush" in Reaction::UI::Window method.

Sets $c->res (the Catalyst::Response) body, status and content type to output a 404 (File not found) error.

Sets $c->res (the Catalyst::Response) body, status and content type to output a 403 (Forbidden) error.

See Reaction::Class for authors.

See Reaction::Class for the license.
http://search.cpan.org/~arcanez/Reaction/lib/Reaction/UI/Controller/Root.pm
Simple Library for writing CGI programs. See for the CGI specification.

This version of the library is for systems with version 2.0 or greater of the network package. This includes GHC 6.6 and later. For older systems, see

Based on the original Haskell binding for CGI:

Original Version by Erik Meijer mailto:erik@cs.ruu.nl. Further hacked on by Sven Panne mailto:sven.panne@aedion.de. Further hacking by Andy Gill mailto:andy@galconn.com. A new, hopefully more flexible, interface and support for file uploads by Bjorn Bringert mailto:bjorn@bringert.net.

Here is a simple example, including error handling (not that there is much that can go wrong with Hello World):

  import Network.CGI

  cgiMain :: CGI CGIResult
  cgiMain = output "Hello World!"

  main :: IO ()
  main = runCGI (handleErrors cgiMain)

Catches any exception thrown by the given CGI action, returns an error page with a 500 Internal Server Error, showing the exception information, and logs the error. Typical usage:

  cgiMain :: CGI CGIResult
  cgiMain = ...

  main :: IO ()
  main = runCGI (handleErrors cgiMain)

Add a response header. Example:

  setHeader "Content-type" "text/plain"

Get the value of an input variable, for example from a form. If the variable has multiple values, the first one is returned. Example:

  query <- getInput "query"

Get all the values of an input variable, for example from a form. This can be used to get all the values from form controls which allow multiple values to be selected. Example:

  vals <- getMultiInput "my_checkboxes"

Get the value of a CGI environment variable. Example:

  remoteAddr <- getVar "REMOTE_ADDR"
http://hackage.haskell.org/package/cgi-3001.1.4/docs/Network-CGI.html
21 April 2009 17:54 [Source: ICIS news]

By Andy Brice

Demand remains strong for health foods and dietary supplements. And, if anything, products promoting health and well-being are even more of a priority for money-conscious consumers, according to Philipp Siebrecht, global business manager at Swiss-based ingredient supplier DSM Nutritional Products.

A year after the launch of DSM's ResVida brand in March 2008, the high-purity form of the natural ingredient resveratrol is selling strongly and Siebrecht is optimistic about future growth in the market. He points specifically to consumers in certain countries: a relatively small investment, said Siebrecht, could help them save money in the longer term.

Resveratrol, a natural ingredient found in red wine, has been lauded as having significant health benefits. These include helping to minimise the risks of heart disease and improving locomotor skills – the basis of human movement – and preserving learning abilities and endurance. Resveratrol occurs naturally in a number of plants, including grapes, mulberries, peanuts, a Chinese plant called giant knotweed and white hellebore.

"A lot of people know about the benefits of red wine when drunk in modest quantities," said Siebrecht. "DSM managed to extract that molecule and we now supply it to the functional food industry, who adds it to their dietary supplements or beverages and food products."

One way to increase life expectancy is to slightly reduce calorie intake, said Siebrecht. Researchers have found that low doses of resveratrol in the diet of mice mimic this effect. DSM has enhanced the qualities of resveratrol and provides a 99% pure form. A dose of 30-200mg each day could have a positive effect on health, Siebrecht said, with 30mg as the equivalent of five or six bottles of wine – but without the obvious health implications.

DSM submitted its novel food application for the EU-wide approval of ResVida in December 2008.
The company expected the application to be approved by the end of 2010.
http://www.icis.com/Articles/2009/04/21/9209900/optimism-for-nutraceuticals-sector-despite-downturn.html
V2 -> V3:
+ rebase to 23-mm1 atop RvR's split lru series  [no change]
+ fix function return types [void -> int] to fix build when not configured.

New in V2.

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>

Index: linux-2.6.25-rc3-mm1/mm/mlock.c
===================================================================
--- linux-2.6.25-rc3-mm1.orig/mm/mlock.c	2008-03-04 16:19:46.000000000 -0500
+++ linux-2.6.25-rc3-mm1/mm/mlock.c	2008-03-04 17:29:19.000000000 -0500
@@ -199,6 +199,37 @@ int __mlock_vma_pages_range(struct vm_ar
 	return ret;
 }

+/**
+ * mlock_vma_pages_range - lock the pages of a VMA in memory
+ * @vma:   vm area to mlock into memory
+ * @start: start address in @vma of range to mlock,
+ * @end:   end address in @vma of range
+ *
+ * Called with current->mm->mmap_sem held write locked.  Downgrade to read
+ * for faulting in pages.  This can take a looong time for large segments.
+ *
+ * We need to restore the mmap_sem to write locked because our callers'
+ * callers expect this.  However, because the mmap could have changed
+ * [in a multi-threaded process], we need to recheck.
+ */
+int mlock_vma_pages_range(struct vm_area_struct *vma,
+			unsigned long start, unsigned long end)
+{
+	struct mm_struct *mm = vma->vm_mm;
+
+	downgrade_write(&mm->mmap_sem);
+	_ 0;
+}
+
 #else /* CONFIG_NORECLAIM_MLOCK */

 /*
@@ -265,14 +296,38 @@ success:
 	mm->locked_vm += nr_pages;

 	/*
-	 * vm_flags is protected by the mmap_sem held in write mode.
+	 * vm_flags is protected by the mmap_sem held for write.
 	 * It's okay if try_to_unmap_one unmaps a page just after we
 	 * set VM_LOCKED, __mlock_vma_pages_range will bring it back.
 	 */
 	vma->vm_flags = newflags;

+	/*
+	 * mmap_sem is currently held for write.  If we're locking pages,
+	 * downgrade the write lock to a read lock so that other faults,
+	 * mmap scans, ... while we fault in all pages.
+	 */
+	if (lock)
+		downgrade_write(&mm->mmap_sem);
+
 	__mlock_vma_pages_range(vma, start, end);

+	if (lock) {
+		/*
+		 *;
+	}
+
 out:
 	if (ret == -ENOMEM)
 		ret = -EAGAIN;

Index: linux-2.6.25-rc3-mm1/mm/internal.h
===================================================================
--- linux-2.6.25-rc3-mm1.orig/mm/internal.h	2008-03-04 16:19:46.000000000 -0500
+++ linux-2.6.25-rc3-mm1/mm/internal.h	2008-03-04 17:29:19.000000000 -0500
@@ -61,24 +61,21 @@ extern int __mlock_vma_pages_range(struc
 /*
  * mlock all pages in this vma range.  For mmap()/mremap()/...
  */
-static inline void mlock_vma_pages_range(struct vm_area_struct *vma,
-			unsigned long start, unsigned long end)
-{
-	__mlock_vma_pages_range(vma, start, end);
-}
+extern int mlock_vma_pages_range(struct vm_area_struct *vma,
+			unsigned long start, unsigned long end);

 /*
  * munlock range of pages.  For munmap() and exit().
  * Always called to operate on a full vma that is being unmapped.
  */
-static inline void munlock_vma_pages_range(struct vm_area_struct *vma,
+static inline int munlock_vma_pages_range(struct vm_area_struct *vma,
 			unsigned long start, unsigned long end)
 {
 	VM_BUG_ON(start != vma->vm_start || end != vma->vm_end);
 	vma->vm_flags &= ~VM_LOCKED;
-	__mlock_vma_pages_range(vma, start, end);
+	return __mlock_vma_pages_range(vma, start, end);
 }

 extern void clear_page_mlock(struct page *page);
@@ -90,10 +87,10 @@ static inline int is_mlocked_vma(struct
 }
 static inline void clear_page_mlock(struct page *page) { }
 static inline void mlock_vma_page(struct page *page) { }
-static inline void mlock_vma_pages_range(struct vm_area_struct *vma,
-			unsigned long start, unsigned long end) { }
-static inline void munlock_vma_pages_range(struct vm_area_struct *vma,
-			unsigned long start, unsigned long end) { }
+static inline int mlock_vma_pages_range(struct vm_area_struct *vma,
+			unsigned long start, unsigned long end) { return 0; }
+static inline int munlock_vma_pages_range(struct vm_area_struct *vma,
+			unsigned long start, unsigned long end) { return 0; }
 #endif /* CONFIG_NORECLAIM_MLOCK */

Index: linux-2.6.25-rc3-mm1/mm/mmap.c
===================================================================
--- linux-2.6.25-rc3-mm1.orig/mm/mmap.c	2008-03-04 17:29:19.000000000 -0500
+++ linux-2.6.25-rc3-mm1/mm/mmap.c	2008-03-04 17:30:00.000000000 -0500
@@ -2007,8 +2007,9 @@ unsigned long do_brk(unsigned long addr,
 		return -ENOMEM;

 	/* Can we just expand an old private anonymous mapping? */
-	if (vma_merge(mm, prev, addr, addr + len, flags,
-			NULL, NULL, pgoff, NULL))
+	vma = vma_merge(mm, prev, addr, addr + len, flags,
+			NULL, NULL, pgoff, NULL);
+	if (vma)
 		goto out;

 	/*
-- 
All Rights Reversed
Recently, with the shipping of SQL Server 2005, we heard from customer feedback about difficulty making successful remote connections. To make one, all you need is:

CONN_STRING = "Server=.\\SQLEXPRESS;Initial Catalog=master;Integrated Security=SSPI";

You must allow remote connections too: Start -> Programs -> SQL Server 2005 -> Configuration Tools -> SQL Server Surface Area Configuration. Click "Surface Area Configuration for Services and Connections". Choose "Database Engine -> Remote Connections", select "Local and Remote Connections", Apply, OK.

Can we change the name of SQL Server Express, at the time of installation, from the default machine name to any other name? Is this possible? Please help, it is very urgent.

By default, SQL Express is installed as a named instance, and the fixed instance name is "sqlexpress"; hence when you make a connection, you need to specify "Data Source = <machinename>\sqlexpress" in your connection string.

Actually I'm trying to ask whether it is possible to change <machinename> at the time of installing SQL Express. As apps made in SQL Server 2K5 Express are distributed to clients, we have to change the machine name in the conn string. Is there any solution for this?

I'm having a problem connecting with a Java application but I CAN connect using my .NET application - the user name and password are the same for both. The error I get is: com.microsoft.sqlserver.jdbc.SQLServerException: Cannot open database "CORNERS" requested by the login. The login failed. An interesting note - I get the same message if the database is not running. SQL Server Express 2005 is installed in mixed mode. Here is my connection string in the .NET application: <add key="connectString" value="Server=(local);UID=sa;PWD=myPasswd;Database=CORNERS" />.
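Connection strings like the ones quoted in this thread are just semicolon-delimited key=value pairs, and several of the failures reported here trace back to a malformed pair or a misspelled key. As an illustration only - this little helper is hypothetical, not part of ADO.NET or any Microsoft library - a few lines of Python can split a string into its parts for eyeballing:

```python
# Sketch: split an ADO.NET-style connection string into key/value pairs.
# Hypothetical helper for sanity-checking; not part of any Microsoft library.

def parse_conn_string(conn):
    parts = {}
    for pair in conn.split(";"):
        pair = pair.strip()
        if not pair:
            continue  # tolerate a trailing semicolon
        key, _, value = pair.partition("=")
        parts[key.strip().lower()] = value.strip()
    return parts

conn = "Server=.\\SQLEXPRESS;Initial Catalog=master;Integrated Security=SSPI"
print(parse_conn_string(conn))
# -> {'server': '.\\SQLEXPRESS', 'initial catalog': 'master', 'integrated security': 'SSPI'}
```

If a string does not split the way you expect - say the Server key is missing, or the instance name lost its backslash - that is worth fixing before digging into protocol or firewall settings.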
These are my values in my Java app web.xml -

<init-param> <param-name>DBDriver</param-name> <param-value>com.microsoft.sqlserver.jdbc.SQLServerDriver</param-value> </init-param> <init-param> <param-name>DBURL</param-name> <param-value>jdbc:sqlserver://localhost\sqlexpress:1055;databaseName=CORNERS</param-value> </init-param> <init-param> <param-name>DBUser</param-name> <param-value>sa</param-value> </init-param> <init-param> <param-name>DBPwd</param-name> <param-value>myPasswd</param-value> </init-param>

And yes, the port is 1055 - I checked to find it. I am using Microsoft SQL Server 2005 JDBC Driver 1.0 (sqljdbc_1.0.809.102). Does anyone have any idea what is wrong, so that the login fails in the Java application but works in the .NET application?

Hi adisciullo, I'm afraid that our team is not very familiar with the JDBC driver. I suggest that you post your question on the SQL Server Data Access forum, as members of our JDBC team normally monitor it for JDBC-related questions. Il-Sung.

Can you get a trace? Also, can you try our 1.1 driver? The 1.1 driver has better tracing.

Hi, I am having a connection problem with SQL Server Express 2005. The problem occurs every day after 6pm (after working hours). The error message is:

Microsoft SQL Server Login
--------------------------
Connection failed
SQLState 01000
SQL Server Error: 10060
[Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]ConnectionOpen (connect)
SQL State: 08001
SQL Server Error: 17
[Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]SQL Server does not exist or access denied.

All computers which connect to the server get this error message. I have been working on this for a few days but without any luck. I would be grateful if someone could help. Thank you. Note: The computers are working fine before 6pm.
To verify this you could use telnet and attempt to open a connection to port 1433 from some client machine -> telnet 123.123.123.123 1433, where 123.123.123.123 is the IP of the SQL Server. I would check all network hardware between the clients and the server; most likely it is a firewall, a router, or an application-level firewall blocking traffic.

Hello there, I have a connection problem with SQL Server Express, but I believe you can help me with this. First, I tried your remote connection string above, and it works, but somehow when I use my own database (since you're using the master), it asks for a login. Here's the error: Microsoft SQL Native Client error '80004005' Cannot open database <mydatabase> requested by the login. The login failed.

Hi, Junifer. You need to grant db access to your database for your login credential. 1) Assuming you were using an NT login (Windows authentication), you can use Management Studio, connect to sqlexpress, go to security, add a login, choose the account, and choose default database = mydatabase. 2) If you were using a SQL login, you can go to security, find the account, click its properties, then make sure it has access to your own database. Good luck! Ming.

Hey Ming, thank you for the reply, but before I received your reply I had it working already, simply by opening Management Studio Express and right-clicking on mydatabase, then Properties. I clicked on Permissions, and under users or roles I added two objects, guest and public, then granted them a few permissions under Explicit Permissions. But I'm just wondering about security. Is my method safe/advisable? I will also try your suggestion and I know it will work.
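The telnet check suggested above can also be scripted. This sketch uses plain Python sockets (nothing SQL-specific; the host and port below are placeholders) to report whether a TCP connection to the server's port opens at all, which separates firewall/routing failures from login failures:

```python
import socket

def port_is_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder address/port - substitute your SQL Server's values.
    print(port_is_open("123.123.123.123", 1433))
```

A False result while the service is known to be running usually means the same thing as the 10060/10061 socket errors in this thread: something between the client and the server (Windows Firewall, a router, IPsec) is dropping the connection, and no amount of login configuration will help until that is fixed.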
If I try to telnet to the port that the ERRORLOG says the service is running on from the server, it connects. If I try it from the client, it says 'Could not open connection to the host'. I'm not sure what to tweak to get this to go. Thanks.

Ahh... found it. Check out the help screen under the TCP/IP properties screen under Protocols for SQLEXPRESS. The short end is that if you are using a firewall you must use a static port; 1433 is recommended. This, and all the rest of the above stuff. You also have to open the port in the Windows firewall on both machines.

On Vista, when I start the application as an administrator it creates/deletes data, but when I run the app in standard user mode it does not perform any operation with SQL Express...

I've got a Windows 2003 Server, where we just installed SQL Server Express 2005 and IIS on the same machine. Our application (classic ASP) errors with "Provider cannot be found. It may not be properly installed." when trying to connect to the database with an anonymous connection (IUSR account). Oddly, it connects successfully if we turn off anonymous users. Any ideas?

Hi, Joel. What if you turn on "anonymous" in IIS again and try to use osql.exe to connect to your Express instance - is the same error displayed? What if you turn off the anonymous connection and use osql.exe - what happens? What is your connection string? The error you saw should not be related to the anonymous users configuration, but there might be an exception. Thanks!
Any idea of why one would work for IUSR and the other would not? Hi, JOel The first one is using OLEDB, the latter is using ODBC driver. So, from the error message, it was probably caused by your OLEDB provider was not correctly installed or you specify the wrong one. 1) Which client provider were you using, namely, when you create your client application, did you configure any provider? what it is? 2) what if you modify the first one by replacing "Provider =SQLOLEDB" whether it works? LMK if you have further question. How can I access SQL Server from the internet ? I have a router with a public IP I have SQL Server 2005 Express installed on a LAN connected machine (192.168.1.64) I want to access this database from the internet. How can I do ? Firewall is already opened for the 1433 TCP Port Thanks Hi, RJ You can use ASP or write ASP.NET application. Following info and example are good start: Good Luck! I also wish to connect to the SQLExpress through the Internet connection. However, not using ASP.Net but an .Net Windows Application. I have a Static IP on the server and I supposed I can write connection string as normal remote connection over the LAN? I attempted but failed to connect. My Server not running IIS, does it matter? (We disregard firewall issue) Thanks. Adrian hi there, im using vb 2005 and im tring to acess a data base on a server via pocket pc application. i got an error on the connection string ! Failure to open SQL Server with given connect string. [ connect string = Provider=SQLOLEDB.1;Server=duros-mobile,1433\SQLEXPRESS;Initial Catalog=praia;Integrated Security=SSPI; ] can you help me? thanks! I am trying to connect to a SQL Server DB from a pocket PC. I use the following connection string on the Pocket PC. "Server=192.168.1.25,1433;Initial Catalog=exilog;User ID=nybc;Password=nybc" I can connect to the server using the IP, port and credentials from SQL Server Management Studio Express. 
But when I run the application in the emulator, I get a "SQL Server does not exist or Access is Denied" SQL Exception. This has been killing me for the past 5 days. Can someone please help me? Cheers, Sampath Hi, John 1433 is the reserved port for sql default instance, your express was installed as a named instance, and by default it is using dynamic tcp port unless you specified. The solution could be [ connect string = Provider=SQLOLEDB.1;Server=duros-mobile\SQLEXPRESS;Initial Catalog=praia;Integrated Security=SSPI; ] and make sure sqlbrowser service is started. Hi, Sampath The error looks like you were using MDAC, and the message is too general to identify your particular problem. Hence, I suggest, you modified your connection string to [Driver={SQL Native Client};Server=192.168.1.25\sqlexpress;Initial Catalog=exilog;User ID=nybc;Password=nybc" I assume you were connecting to sql express instance which is a named instance, and by default it is not using port 1433 which reserved by default instance, but a dynamic port. Or you can take a look at server errorlog to make sure your sql express was listening on tcp and find the port number, then replace 1433 in your connection string w/ the true number. Also, "Driver={SQL Native Client}" requires you must install SQL 2005 Native Client which is part of the 2K5 installation, this provider will populate more detail error info to help you figure out the root cause of connection failure. This error was most frequently hitted by our customers, and in this post, give a brief summary of troubleshooting I am confused. I have 2 windows 2000 sp4 machines that I want to connect to a SQL Express database. Initially I had problems with both machines then I updated MDAC on them and now one of them connects and the other gets the following error: Connection Failed: SQLState: '01000' SQL Server Error: 10061 [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]ConnectionOpen (Connect()). 
Connection failed: SQLState: '08001' [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]SQL Server does not exist or access denied. I know that everything is configured ok on the SQL Express machine as one of the win2000 machines can connect. The only difference I can spot between the 2 machines is the version of the SQL ODBC driver. The one that works is 2000.85.1022.00 and the one that does not work is 2000.85.1064.00. I have no idea why they would have different drivers as the same version on MDAC was installed on both machines. Any help is much appreciated. ConnectionOpen (Connect()) means you cannot open a socket to the remote SQL Server. I don't think this is due to the MDAC driver version. Most likely the tcp-ip port is blocked by Windows firewall and the "good" machine is actually connecting over named pipes and not sockets. So if you go through the steps of adding the SQL Express instance to firewall exclusion list everything should work. See this article for details -> both machines are using TCP/IP Then they should both work the same in theory. Could be issues such as IPsec blocking sockets. IPSec typically blocks machines that are outside of a domain from connecting to machines that are inside a domain. Simple way to verify is go to the SQL Express machine and find the ERRORLOG file for the SQL Express instance. You can locate the ERRORLOG file by connecting to the SQL Express instance and running the following SQL statement: select serverproperty('ERRORLOGFILENAME') This will tell you the location of the error log file for the SQL Express instance. Open this file in notepad and search for the following line: 2007-04-02 15:50:30.71 Server Server is listening on [ 'any' <ipv4> 5555]. Once you know the port, then go to each client and try the following from a command prompt: telnet myServer 5555 If you see this go to a blank screen, then the port is open. Press Ctrl + ] to break out of telnet. 
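Once you have the ERRORLOG open, the listening port can also be pulled out mechanically instead of by eyeballing Notepad. The regex below is a guess based only on the log line quoted above ("Server is listening on [ 'any' <ipv4> 5555]."); real ERRORLOG wording varies between builds, so treat it as a sketch and adjust as needed:

```python
import re

# Matches ERRORLOG lines of the form quoted above, e.g.
#   2007-04-02 15:50:30.71 Server Server is listening on [ 'any' <ipv4> 5555].
LISTEN_RE = re.compile(r"Server is listening on \[\s*'[^']*'\s*<ipv\d>\s*(\d+)\s*\]")

def listening_ports(errorlog_text):
    """Return every TCP port mentioned in 'Server is listening on' lines."""
    return [int(m.group(1)) for m in LISTEN_RE.finditer(errorlog_text)]

sample = "2007-04-02 15:50:30.71 Server Server is listening on [ 'any' <ipv4> 5555]."
print(listening_ports(sample))  # -> [5555]
```

Feed it the whole ERRORLOG file and you get every port the instance is actually bound to, which is the number to use in the telnet (or scripted socket) test from each client.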
I have a .NET application with a database in Access, but when I run it, it asks for a SQL Server connection. Why?

Dear Ming, thanks for all the helpful comments - nevertheless I run into the following strange situation: I am writing a C# application in VS2005 which connects to a SQLEXPRESS database on a different server. I am using the following connection string: "Data Source=myServer\SQLEXPRESS;Initial Catalog=myDB;Integrated Security=True". When running it from the IDE (either Debug or Release mode), the connection works perfectly - however, when starting the application directly (double-clicking myApp.exe), the connection fails with "error: 26 - Error Locating Server/Instance Specified". Do you have any idea? Best regards, Adrian

I am new to SQL Server and just started with the Personal Web Site Starter Kit. The following error was shown in the browser:

=========================================
Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance. The connection will be closed.

And here is the log of SSEUtil.exe: C:\>SSEUtil.exe -l

I have confirmed that both SQL Server (SQLEXPRESS) and the SQL Server Browser are running well. Also, I can even create a new project (whether a web site or a windows app) and connect to my testDB with the SQLDataSource component using a "./sqlexpress" connection string. What's the difference between these two types of DB? How can I get through the MS example?

I have Visual Basic .NET Express and SQL Express. I have created my DBs, so I know that I can use this instance. I have enabled named pipes, remote connections, and TCP/IP for both client protocols & protocols for SQLEXPRESS. I have done everything that is listed on this forum to try and resolve my connectivity issues. I cannot connect from my webapp locally to my instance. I get the generic error and see no errors in the error log. I have made sure my connection string says computername\sqlexpress.
i can put anything in it and I still get the same error. I am going out of my mind. Please help. Hi, Neha Can you check out following two blogs to resolve your problem. Hi all I am having two applications which are using same database.I have upgraded databse from sql server 2000 to 2005 on xp.one application is in vb6.0 which is connecting with the database but the application in asp is giving error as "Microsoft SQL Native Client (0x80004005) Cannot open database "HVSP" requested by the login. The login failed. /NewVspSql/logon.asp, line 41" My connection string for vb6.0 application is as follows "Provider=SQLNCLI.1;Integrated Security=SSPI;Persist Security Info=False;User ID=HVSP;Password=;Initial Catalog=HVSP;Data Source=INFO15\SQLEXPRESS" Meenal Please refer the following blog to check out what is potential cause of the 'Login Failed'. Ming, I have a Vista customer with an odd problem. If his router is on he does not get a connection to the SQL database or gets one so slow that it is unusable. If he turns off the router it works right. If he turns on the router after making a sucessful connection it works right. I am presuming that this a connection timeout problem and we will try a longer setting there. My question to you is what SQL 2005 is doing that conflicts with the router? He is running the application and the SQL Express database on a single machine and it is the only machine connected to the router. In trying to fix this we have added sqlservr, sqlbrowser and port 1433 to the firewall exceptions and set scope to his subnet only. The router manufacturer say the router should not be doing anything to calls limited to the subnet. TIA for any ideas you have on this problem. Hi, I have this issue with the connectivity (timeout expired) in my .asp code. 
Microsoft OLE DB Provider for SQL Server error '80004005' Timeout expired /SQLDB/resultpage.asp, line 49 The connection string is: set conn = Server.CreateObject("ADODB.Connection") conn.open "Provider=SQLOLEDB;Data Source=ServerName;Initial Catalog=DBName;UID=sa;PWD=password" Environment: 1. MS Windows Server 2003, running SQL Server 2005 SP1, DHCP 2. MS Windows 2000 Server SP4, hosting IIS, connection type = TCP-IP Tried few links, but it does not seem to work. Anyone can help? is it possible to interactively query SQLBROWSER (via command line interface) to retrieve the list of sql servers broadcasting their existence? i am not a win32 programmer and do not have a win32 development environment. my only programming experience is with platform neutral perl. any help would be appreciated. net start indicates that i have the sqlbrowser service running. control panels - admin tools - services - also shows that it is running. but i dont know if a command line interface is available... or might someone have a small stand alone executable that i can use for this purpose? vista stand alone computer. sqlexpress default installation (shared memory enabled). having trouble connecting. error is '[dbnetlib open] sql server does not exist or access denied'. this error seems to be a bit contradictory to me. information ive read about sqlexpress states that 'dbnetlib' is only to be used for tcp\ip connectivity to sqlexpress server. not for shared memory. if sqlexpress is installed locally, and client is also local, and sqlexpress is configured to use shared memory for connectivity - why would dbnetlib be utilized at all? is my connect string not ideally configured to use shared memory? my connect string presently looks like; Provider=SQLOLEDB.1; Integrated Security=SSPI; Initial Catalog=MYDB; Use Encryption for Data=False i dont know how to step by step troubleshoot a shared memory connection. 
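On the question above about querying SQLBROWSER from a script: the Browser service answers a one-byte UDP datagram on port 1434 (the SSRP protocol, documented in Microsoft's [MS-SQLR] specification), and its reply is a flat semicolon-delimited string. The sketch below is an unofficial illustration of that wire format - the discovery half needs a live network with a Browser service on it, so only the parsing is shown as reliable; the field names are whatever the server sends back (ServerName, InstanceName, tcp, ...):

```python
import socket

SSRP_PORT = 1434
CLNT_BCAST_EX = b"\x02"  # "enumerate all instances" request byte per [MS-SQLR]

def parse_ssrp_response(payload):
    """Parse the semicolon-delimited instance list in an SSRP reply string.

    Each instance is a run of key;value tokens; instances end with ';;'.
    """
    instances = []
    for chunk in payload.split(";;"):
        fields = chunk.split(";")
        if len(fields) < 2:
            continue
        # pair up tokens: ServerName;HOST;InstanceName;SQLEXPRESS;tcp;1055;...
        instances.append(dict(zip(fields[0::2], fields[1::2])))
    return instances

def discover(timeout=3.0):
    """Broadcast an SSRP request and return parsed replies (needs a live LAN)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(CLNT_BCAST_EX, ("255.255.255.255", SSRP_PORT))
    found = []
    try:
        while True:
            data, _addr = sock.recvfrom(65535)
            # reply header: 0x05 marker plus a 2-byte length, then the string
            found.extend(parse_ssrp_response(data[3:].decode("ascii", "replace")))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return found

if __name__ == "__main__":
    for inst in discover():
        print(inst.get("ServerName"), inst.get("InstanceName"), inst.get("tcp"))
```

The "tcp" field in the reply is the dynamic port the named instance is actually listening on, which is exactly the number clients need when the Browser service itself is unreachable.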
I had the following error when connecting ODBC to sqlserver on a different machine "does not exist or access is denied". After reading loads of posts and trying this, that and the other for weekades I came up with what worked for me (Phew!), it might work for you. In "Local Users and Groups" I added myself (power user) to the "SQLServer2005MSSQLServerADHelperUser$INSTANCE", "SQLServer2005MSSQLUser$SERVER$INSTANCE" and "SQLServer2005SQLBrowserUser$SERVER" groups. BINGO IS HIS NAME-O Obviously obvious from the error message isn't it just..... HTHUMF I think I got a solution... the error is in the connection string and in the configuration... if you'll use the IP instead of (loca), localhost, <machinename>\SQLEXPRESS, it will work. So use 127.0.0.1... the server will love it... and one more thing... at Protocols (SQL Server Configuration) at TCP/IP, set at IPAll - TCP Dymanic ports 1433 and one more... of course... set trusted connection to true... (you should allow remote connections too).. So, the connection string is : "Provider=SQLNCLI;Server=127.0.0.1;Database=database;Trusted_Connection=yes;" Hope this helps someone.. P.S. : from my point of view.. I think this solution is a stupid one as long as local = localhost = 127.0.0.1... I guess for the SQLExpress.... local = localhost != 127.0.0.1 I do not get the error "cannot find odbc" when I run my ASP.NET website with debugging (using the service on a random port #). But when I try the site using it says it cannot find the ODBC connection. Must be something to do with my IIS config... can't find anything that pertains to this however.. Hi I have problem in connecting sql express sp2 in vista. the error done:enabling tcp/ip and named pipeline for sql server. please help me out. Hi There, I have been going around in circles with this problem. Can you please throw some light on why we keep getting this error. I am using the following connection string. 
===============================================
Source=.\SQLEXPRESS;AttachDbFilename=C:\PROGRAM FILES\ATHLETIC GATEWAY AS\OLYMPIATOPPENS TRENINGSDAGBOK V2.0\DATA\OLTDDB.MDF;Integrated Security=True;Connect Timeout=30;User Instance=True

And I keep getting this error almost all the time. Message: Cannot open user default database. Login failed. Login failed for user 'PLAIN06\adminuser'.

One reason for this is that we have two applications accessing the DB. One is a [WEB] based application [running on the Cassini web server]. The other one is a [WINDOWS] based application. I verified my code to check for any connections that might have been left open after an open connection call, but every open connection call is properly followed by a close connection call. I hope I am right in understanding that the connections are closed once the close connection method is called. Or does the closure take time, as in the case of connection pooling?

Specific to my case, I have to run the [WINDOWS] based application before running the [WEB] application. The Windows application does a synchronization process first. Again, for resynchronization I have to close the [WEB] application and also physically close the Cassini web server process to once again allow my [WINDOWS] based application to connect to the DB.

I agree that, considering the nature of a SQL Server User Instance, only a single user will be able to connect to the database at any given point in time. However, having said this, is there a way to verify from my .NET code whether there is any open connection to this database? If so, please let me know. Thanks & Regards, Sougandh Pavithran

Hi, I have a problem connecting to SQL Server on Vista. I developed a desktop application in Visual Studio. The setup contains SQL Server and the .NET Framework. It works fine when I deploy on XP, but it did not work on Vista, throwing the exception given below.

Hey, my database name is gssportal\sqlexpress but I'm unable to connect from my Tomcat. It is giving Login failed for user .. Even I changed my database to Mixed Mode (Windows as well as SQL Authentication). Please help me.
Hey i configured my data base in tomcat in this way <ResourceParams name="MSSQL"> <parameter> <name>url</name> <value>jdbc:sqlserver://gssportal\sqlexpress:1433;DatabaseName=webexpenses3 </value> </parameter> <name>password</name> <value>sql@gss</value> <name>maxActive</name> <value>4</value> <name>maxWait</name> <value>5000</value> <name>driverClassName</name> <value>com.microsoft.sqlserver.jdbc.SQLServerDriver</value> <name>username</name> <value>sa</value> <name>maxIdle</name> <value>2</value> </ResourceParams> I'm able to login from sql server management studio but when i try to connect form tomcat it is giving this error org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFact ory (Login failed for user 'sa'.) at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSou rce.java:855) at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource .java:540) at com.gssamerica.expensereporting.ui.common.SQLManager.getConnection(SQ LManager.java:95) at com.gssamerica.expensereporting.business.dao.CacheHome.getExpenseIds( CacheHome.java:46) at com.gssamerica.expensereporting.business.listener.LookupCacheListener .cacheExpenseId(LookupCacheListener.java:183) .contextInitialized(LookupCacheListener.java:55) at org.apache.catalina.core.StandardContext.listenerStart(StandardContex t.java:3827) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4 343) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase .java:823) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:80 7) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:595) at org.apache.catalina.core.StandardHostDeployer.addChild(StandardHostDe ployer.java:903) at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces sorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at 
org.apache.commons.beanutils.MethodUtils.invokeMethod(MethodUtils.jav a:216) at org.apache.commons.digester.SetNextRule.end(SetNextRule.java:256) at org.apache.commons.digester.Rule.end(Rule.java:276) at org.apache.commons.digester.Digester.endElement(Digester.java:1058) at org.apache.catalina.util.CatalinaDigester.endElement(CatalinaDigester .java:76) at org.apache.xerces.parsers.AbstractSAXParser.endElement(Unknown Source ) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanEndElement( Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContent Dispatcher.dispatch(Unknown Source) at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Un known67) at org.apache.catalina.core.StandardHostDeployer.install(StandardHostDep loyer.java:488) at org.apache.catalina.core.StandardHost.install(StandardHost.java:863) at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.j ava:483) at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:427 at org.apache.catalina.startup.HostConfig.checkContextLastModified(HostC onfig.java:800) at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1085) at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java :327) at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(Lifecycl eSupport.java:119) at org.apache.catalina.core.StandardHost.backgroundProcess(StandardHost. java:800) at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.p rocessChildren(ContainerBase.java:1619) rocessChildren(ContainerBase.java:1628) at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.r un(ContainerBase.java:1608) at java.lang.Thread.run(Thread.java:595) Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Login failed for use r 'sa'. 
So urce) at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(Unknown Source at com.microsoft.sqlserver.jdbc.SQLServerConnection.access$000(Unknown S ource) at com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecu te(Unknown Source) at com.microsoft.sqlserver.jdbc.TDSCommand.execute(Unknown Source) at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(Unkno wn Source) at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(Unknow n Source) at com.microsoft.sqlserver.jdbc.SQLServerConnection.loginWithoutFailover at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(Unknown Sour ce) at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(Unknown Source) at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(Driv erConnectionFactory.java:37) at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(Poolable ConnectionFactory.java:290) at org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(Bas icDataSource.java:877) rce.java:851) ... 41 more Jun 20, 2008 12:43:10 PM org.hibernate.util.JDBCExceptionReporter logExceptions WARNING: SQL Error: 0, SQLState: null SEVERE: Cannot create PoolableConnectionFactory (Login failed for user 'sa'.) Jun 20, 2008 12:43:10 PM com.gssamerica.expensereporting.business.dao.StatusHome StatusList SEVERE: find by example failed org.hibernate.exception.GenericJDBCException: Cannot open connection at org.hibernate.exception.SQLStateConverter.handledNonSpecificException (SQLStateConverter.java:103) at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.j ava:91) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelp er.java:43) er.java:29) at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager .java:420) Installed SQL express 2005 [ SQLEXPR_ADV ]. Operating system vista. I couldn't create locally a connection under object explorer. 
The following error pops up:

TITLE: Connect to Server
------------------------------
Cannot connect to BINYAM.

I have been successfully running a connection to my SQL Server Express 2005 from several computers on our local network. However, I am having problems now with two PCs after I changed the Windows user account name and password. This is probably a simple fix, but where can I fix this? Do I need to set up or change a Windows user account on the server accordingly? Any advice is much appreciated! I should clarify that it is the username on the 2 client PCs that was changed, which seemed to cause a problem logging on to SQL on the server.

This is for your information. If you want to understand dynamically allocated ports on SQL Server, this KB article is pretty good. Once SQL Server allocates a port, you can be almost sure it stays on that port until you change it manually.

localhost\SQLEXPRESS gives Unrecognized Escape Sequence

Perhaps you are using C# and not using the @ string prefix; then try:

localhost\\SQLExpress

Or you can use the @ prefix on the connection string; then you do not need to escape the backslash:

@"Server=localhost\SQLExpress;..."

I am almost getting frustrated, please help me. I need to establish a connection to the SQL Express on my system from my Visual C#. I am programming a smart device using Visual Studio 2005.
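The "Unrecognized Escape Sequence" pitfall above is not specific to C#: most languages treat a backslash in an ordinary string literal as the start of an escape sequence. A quick illustration of the same idea in Python (an analogy only; the connection string in the question is C#):

```python
# In a normal string literal the backslash begins an escape sequence,
# so a server name like localhost\SQLEXPRESS must be written with "\\".
escaped = "localhost\\SQLEXPRESS"

# A raw string leaves backslashes alone - the same role C#'s @"..." plays.
raw = r"localhost\SQLEXPRESS"

print(escaped == raw)  # True: both contain a single backslash
print(len("\\"))       # 1: the two characters in the literal are one backslash
```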
I did the following:

- I enabled remote connections from the Surface Area Configuration for TCP and named pipes
- I started the browser
- I could connect from MS Visual Studio

The server is up and running, but when I tried to connect from Visual Studio using the code:

using System;
using System.Collections.Generic;
using System.Text;
using System.Data.SqlClient;
using System.Windows.Forms;
using System.IO;
using System.Reflection;
using System.Data;

namespace mQAQI
{
    class sqlDatabaseUtil
    {
        SqlConnection dataConnection = new SqlConnection();

        public void getConnection()
        {
            try
            {
                String connString = "Data Source=abudawe\\sqlexpress;Initial Catalog=mQAQI;Integrated Security=True";
                //String connString = "Data Source=xx.sdf";
                dataConnection.ConnectionString = connString;
                dataConnection.Open();
                MessageBox.Show("Connected to database successfully.");
            }
            catch (FileNotFoundException fe)
            {
                MessageBox.Show(fe.StackTrace + "Error Accessing the database.");
                MessageBox.Show(fe.Message);
            }
            catch (SqlException sqle)
            {
                MessageBox.Show(sqle.StackTrace + "Error Accessing the database.");
                MessageBox.Show(sqle.Message);
            }
            catch (IOException io)
            {
                MessageBox.Show(io.StackTrace + "Error Accessing the database.");
                MessageBox.Show(io.Message);
            }
            /*
            catch (Exception e)
            {
                MessageBox.Show(e.StackTrace + "Error Accessing the database.");
                Console.WriteLine(e.StackTrace);
            }
            */
            finally
            {
                dataConnection.Close();
            }
        }
    }
}

I get the following message: "specified server not found: abudawe\sqlexpress". Please, what am I doing wrong?

I have battled with this issue for the past 4 days. I repeatedly get the message "Microsoft SQL Server 2005 was unable to install on your computer." I uninstall and reinstall the SQL and FFI software and get the same message.

Error signature:
EventType: sql90setup
P1: unknown
P2: 0x643
P3: unknown
P4: 0x643
P5: unknown
P6: unknown
P7: msxm16.msi@6.20.1099.0

Error Report Contents:
c:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\LOG\SqlSetup0002.cab

What do I do?
I am using SQL Server 2005 Express with my app written in VB6. I have a problem: after a day of work (when I come in the morning) the app is working very slowly. When I open the SQL management tool or restart the SQL service, everything is back to normal. I tried configuring the database with auto close = false, and auto shrink is also off. Please see if you can help...
Thanks in advance
Alon
http://blogs.msdn.com/sql_protocols/archive/2006/03/23/558651.aspx
Here’s what the page looks like (no matter which of the three versions of the page you choose):

Interesting. But I'm not seeing the 'unobtrusive' part, i.e. jQuery is 'unobtrusive'. I think unobtrusive would be completely separated from the HTML. Unobtrusive to me would mean that I could have 3 HTML pages of the exact same content with 3 different CSS styles or JS files that 'act upon' the HTML. The HTML would be untouched by it. To be honest, I'd rather see MS go a different route here: you have WPF for Windows apps - XAML. Silverlight with XAML. How about creating XAML on the web as well? Basically a XAML-based view engine.

Bertrand - Do you have any samples that would use the MVC project as a starting point? Data sources from controller methods as well as usage of the template engine in a project that has master pages (where the <body> is not present in each .aspx page). Thanks, Dennis

@Steve: so if I'm following you, you think unobtrusive means that even the data should be introduced in the template by script? You sure can do that, but then the advantages of using templates become pretty small. But if that's your thing, sure, go ahead and do that. We do support jQuery after all... We had something closer to XAML in xml-script three years ago, and also experimented with XAML for HTML and JS, and it really didn't work very well. The problem is that you're not starting from scratch here like WPF was: you have HTML to deal with. We chose instead to use legal extensions to XHTML and we're quite happy with it. Your feedback is actually one of very few stating this is not the right approach, out of overwhelmingly positive feedback. Not saying you're wrong, just that this works better for more people.

@Dennis: this is just HTML, you can apply that to MVC very easily. You can for example use a JSON result as the server data source.
For the master page scenario, you can actually do the namespace declaration on the dataview's tag, and it will work just as well as on body. You can also call Sys.Application.processNode or processNodes to avoid having to touch body to activate the DOM.

Steve has a point with his remark about jQuery. It is not unobtrusive. jQuery is a nice idea as it enables a quick way of doing some nice JavaScript thingies like animation and stuff. But look at it:

<img sys:

For me this is far from elegant coding. And yes, I don't want to bother to learn another language or do things differently. Let me explain a bit more. With ASP.NET you can do most of it server side, right? Why not use .NET to do the (Ajax) JavaScript for you, server side too? You have to abstract JavaScript away from it all. It's there but you don't program in it. All the JS features like animation, effects, etc. should be done server side. This is accomplished for example by Gaiaware () — see some introduction videos here. I really like it (and use it) because it is unobtrusive, fast, elegant and allows the creation of event handlers and components over the markup they generate, because it is done server side.

@Edward: nobody claimed the final declarative sample was unobtrusive. Now the markup in the section before that contains only one thing that is not pure HTML: the binding expressions. If you think this is out of place, I guess you would find *any* template engine not pure enough for your taste. Which is fine, you still have plenty of options, but you may be missing out on some major productivity gains. Now the server-side approach to abstracting the client-side stuff is something we've also done (and many, many others too, way before Gaiaware), and it works in many cases, but nothing ever beats the pure client approach, if only because you can have a client-side representation of the data.

This is a pretty innovative templating engine, Bertrand!
One quick question: What's the binding expression for the current JSON object? For example, in my template I have <a href="javascript:self.location.href=Util.getBusinessDetailsUrl({{this}}, 'myRatings')">{{Name}}</a> Util.getBusinessDetailsUrl is my own function that takes in the entire bound object, not just a field ("Name" is a field of my object for example). Is {{this}} correct? @James: the correct markup for this would probably be <a href="{{ Util.getBusinessDetailsUrl($dataItem, 'myRatings') }}">{{Name}}</a> Notice how the binding expression takes the whole attribute value.
http://weblogs.asp.net/bleroy/archive/2008/11/28/instantiating-components-on-template-markup.aspx
Today’s post isn’t directly related to VR, but it is a technique I use in all the VR apps I build in Unity3d, and it produces cleaner code. It’s very common to have code that is waiting to receive an event and, when it occurs, to go and do something about it. Unity game objects have the SendMessage, SendMessageUpwards and BroadcastMessage methods to send an event up or down its game object hierarchy. But these are generally discouraged because the event is a string and there is no guarantee you didn’t put in a typo. Unity C# code has delegate events for adding/removing event listeners and then handling the code. With those you end up with code littered with linkages between classes, which can make it harder to maintain and test your code.

Unity Events

Unity has another type of event based on the class UnityEvent. The UnityEvent object is supported in the scene hierarchy, so you can add/remove event handlers in the inspector without writing any code! The advantage is it is easy to fire events, and easy to wire up event handlers in the scene. It keeps component linkages separate from the code. Here’s what publishing an event looks like:

using UnityEngine.Events;

public class ExampleFiresEvents : MonoBehaviour
{
    public UnityEvent onSomethingHappenedEvent;

    void Update()
    {
        if (Input.GetButtonDown("Fire1"))
        {
            onSomethingHappenedEvent.Invoke();
        }
    }
}

And then add a public void SomeMethod() method in another C# script attached to a game object. Then use the inspector to wire up that game object and choose that method to receive the event:

That’s it: simply call Invoke() on your event and all event listeners will be called. Internally, I believe this still uses the C# delegate system, which is why, if you want to add/remove listeners via code, you can still do that using AddListener and RemoveListener.

Demo
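UnityEvent itself is C#-only, but the add/remove/invoke pattern it wraps is easy to picture in any language. Below is a minimal, hypothetical Python sketch of the same listener idea (an analogy, not Unity code; the class and method names are made up):

```python
class Event:
    """A tiny publish/subscribe event, loosely analogous to UnityEvent."""

    def __init__(self):
        self._listeners = []

    def add_listener(self, fn):
        # Like UnityEvent.AddListener: register a callback
        self._listeners.append(fn)

    def remove_listener(self, fn):
        # Like UnityEvent.RemoveListener: unregister a callback
        self._listeners.remove(fn)

    def invoke(self, *args):
        # Like UnityEvent.Invoke: call every registered listener
        for fn in list(self._listeners):
            fn(*args)


on_something_happened = Event()
fired = []
on_something_happened.add_listener(lambda: fired.append("handled"))
on_something_happened.invoke()
print(fired)  # ['handled']
```

The publisher only knows about the Event object, never about the subscribers, which is exactly the decoupling the post is after.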
http://talesfromtherift.com/a-better-way-to-hook-up-events-in-unity3d/
Hadoop 1.x Architecture is history now, because most Hadoop applications are using the Hadoop 2.x Architecture. But understanding the Hadoop 1.x Architecture still gives us insight into how Hadoop has evolved over time. As explained in my post related to Hadoop Core Concepts, Hadoop mainly provides distributed storage (HDFS) and a distributed computation engine (MapReduce) to solve certain problems of the Big Data world. Both of these core components have a set of processes (daemons).

HDFS – Hadoop Distributed File System

HDFS in Hadoop 1.x mainly has 3 daemons: Name Node, Secondary Name Node and Data Node.

Name Node
- There is only a single instance of this process running on a cluster, and that is on a master node
- It is responsible for managing metadata about the files distributed across the cluster
- It manages information like the location of file blocks across the cluster and their permissions
- This process reads all the metadata from a file named fsimage and keeps it in memory
- After this process is started, it updates metadata for newly added or removed files in RAM
- It periodically writes the changes to one file called edits as edit logs
- This process is the heart of HDFS; if it is down, HDFS is not accessible any more

Secondary Name Node
- For this also, only a single instance of this process runs on a cluster
- This process can run on the master node (for smaller clusters) or on a separate node (in larger clusters), depending on the size of the cluster
- One misinterpretation from the name is "this is a backup Name Node" but IT IS NOT!!!!!
- It manages the metadata for the Name Node.
That is, it reads the information written in the edit logs (by the Name Node) and creates an updated file of the current cluster metadata
- Then it transfers that file back to the Name Node so that the fsimage file can be updated
- So, whenever the Name Node daemon is restarted, it can always find updated information in the fsimage file

Data Node
- There are many instances of this process running on various slave nodes (referred to as Data Nodes)
- It is responsible for storing the individual file blocks on the slave nodes in the Hadoop cluster
- Based on the replication factor, a single block is replicated on multiple slave nodes (only if the replication factor is > 1) to prevent data loss
- Whenever required, this process handles access to a data block by communicating with the Name Node
- This process periodically sends heartbeats to the Name Node to make the Name Node aware that the slave process is running

Hadoop 1.x has a Single Point Of Failure

As per the description above, HDFS has only one Name Node, so if that process or machine goes down, the complete cluster goes down. That is why the Name Node in Hadoop 1.x is considered to be a Single Point of Failure.

MapReduce – Distributed Computation

As explained in my post Simple explanation of Hadoop Core Components, the MapReduce component is responsible for distributed computing on the Hadoop cluster. Basically it uses two daemons named Job Tracker and Task Tracker.
The Hadoop MapReduce algorithm basically has two phases, Map and Reduce.

Job Tracker
- Only one instance of this process runs on a master node, same as the Name Node
- Any MapReduce job is submitted to the Job Tracker first
- The Job Tracker checks the locations of the various file blocks used in the MapReduce processing
- Then it initiates separate tasks on the various Data Nodes (where the blocks are present) by communicating with the Task Tracker daemons

Task Tracker
- This process has multiple instances running on the slave nodes (typically it runs on the slave nodes where the Data Node process is running)
- It receives the job information from the Job Tracker daemon and initiates a task on that slave node
- In most cases, the Task Tracker initiates the task on the same node where the physical data block is present
- Same as the Data Node daemon, this process also periodically sends heartbeats to the Job Tracker to make the Job Tracker aware that the slave process is running
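The Map and Reduce phases described above can be pictured with the classic word-count example. The following is a toy single-process Python sketch for illustration only; real Hadoop runs the map tasks on the Task Trackers that hold the data blocks and shuffles the intermediate pairs between nodes:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: for every word in the input split, emit a (word, 1) pair
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle + Reduce: group the pairs by key and sum the counts
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

data = ["hadoop stores data", "hadoop computes data"]
print(reduce_phase(map_phase(data)))
# {'hadoop': 2, 'stores': 1, 'data': 2, 'computes': 1}
```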
Stay tuned for other interesting information and please let me know your thoughts by writing comments below. April 5, 2016 at 11:52 am its really good….. July 7, 2018 at 8:45 am Thanks for the article and detailed information of the technology. It helped a lot to understand the 1.x version and what led to version 2.x of Hadoop
http://backtobazics.com/big-data/hadoop/understanding-hadoop-1-x-architecture-and-its-demons/
function scope
Discussion in 'Python', started by Baris Dem…

Similar threads:
- "Scope - do I need two identical classes, each with different scope?" (…ann, Sep 12, 2005, in forum: Java; 13 replies, 719 views; last post: Patricia Shanahan, Sep 13, 2005)
- "How do namespace scope and class scope differ?" (Steven T. Hatton, Jul 18, 2005, in forum: C++; 9 replies, 545 views; last post: Kev, Jul 19, 2005)
- "Re: Lexical scope vs. dynamic scope" (Xah Lee, Feb 26, 2009, in forum: Java; 0 replies, 2,315 views; last post: Xah Lee, Feb 26, 2009)
- "Having trouble understanding function scope and variable scope" (Andrew Falanga, Nov 22, 2008, in forum: Javascript; 2 replies, 241 views; last post: Andrew Falanga, Nov 22, 2008)
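Since the threads above are about function scope, here is a short, hypothetical Python illustration of lexical (static) scoping, where a name is resolved from the innermost function outward:

```python
x = "global"

def outer():
    x = "enclosing"
    def inner():
        # Resolved lexically: local -> enclosing -> global -> builtins
        return x
    return inner()

def shadowing():
    x = "local"  # shadows the module-level x inside this function only
    return x

print(outer())      # enclosing
print(shadowing())  # local
print(x)            # global - the assignments above never touched it
```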
http://www.thecodingforums.com/threads/function-scope.668495/
Dart Twitter API

A project made to make using the Twitter API from Flutter and Dart a bit easier. This package provides high-level functionality for connecting to the Twitter API. Provide the secret and public keys, and then make the request to Twitter. The package will handle all of the authentication. This only works for application authentication. User authentication is not implemented and is not planned for the future.

How to Use

Below is an example of how the package would be used.

import 'package:twitter_api/twitter_api.dart';

// Creating the twitterApi object with the secret and public keys
// These keys are generated from the Twitter developer page
// Don't share the keys with anyone
final _twitterOauth = new twitterApi(
  consumerKey: consumerApiKey,
  consumerSecret: consumerApiSecret,
  token: accessToken,
  tokenSecret: accessTokenSecret
);

// Make the request to Twitter
Future twitterRequest = _twitterOauth.getTwitterRequest(
  // HTTP method
  "GET",
  // Endpoint you are trying to reach
  "statuses/user_timeline.json",
  // The options for the request
  options: {
    "user_id": "19025957",
    "screen_name": "TTCnotices",
    "count": "20",
    "trim_user": "true",
    "tweet_mode": "extended", // Used to prevent truncating tweets
  },
);

// Wait for the future to finish
var res = await twitterRequest;

// Print off the response
print(res.statusCode);
print(res.body);

Requests

Only application authentication is supported by this package. There are two different types of parameters that will be needed: dependent and independent.

The first type is the independent parameters. These are things that every single request will need, no matter the content you are trying to get. These values include:
- A consumer API key
- A consumer API secret key
- An access token
- A secret access token

These are the keys that are used to authenticate your request. Make sure they are correctly entered. You should keep these secret and private.

Next are the dependent variables.
These are things that depend on the type of request you are trying to make. These values include:
- method: HTTP method
  - This is a required parameter of type String
  - GET or POST
- url: The endpoint you are trying to reach
  - This is a required parameter of type String
  - Some examples on the Twitter website include /1.1/, but do not include that when making requests here. That part is already added internally
- options: The parameters of the request you are trying to make
  - This is an optional parameter of type Map<String, String>
  - These are things like which user you are trying to view, how many tweets you are trying to get, whether to strip user info from the response or not
  - The full list of these parameters can be found at the Twitter Developer Website

Response

Below is a truncated example of the data that is returned from the above example. It is initially sent as a string and needs to be parsed to get to this state.

To convert it from a string to a List of Maps:

import 'dart:convert';
var tweets = json.decode(res.body);

The data that comes out of the request after converting it:
"twitter.com\/TTCnotices\/sta\u2026" }, "quoted_status": { "created_at": "Wed Oct 02 23:25:25 +0000 2019", "id": 1179537904022016000, "id_str": "1179537904022016000", "full_text": "52 Lawrence West and 952 Lawrence West Express: Detour westbound via Culford Rd, Maple Leaf Dr and Jane St due to a collision.", "truncated": false, "display_text_range": [0, 126], "entities": { "hashtags": [], "symbols": [], "user_mentions": [], "urls": [] }, ": false, "retweet_count": 1, "favorite_count": 1, "favorited": false, "retweeted": false, "lang": "en" }, "retweet_count": 0, "favorite_count": 0, "favorited": false, "retweeted": false, "possibly_sensitive": false, "lang": "en" }, ]
https://pub.dev/documentation/twitter_api/latest/
Xcode is becoming increasingly popular as an IDE for developing external objects for Max and MSP (Jitter support in Xcode is not yet available, but is forthcoming). It offers several benefits over using CodeWarrior, not least of which is that Xcode is free. However, if you’ve gotten used to the comforts of CodeWarrior, then Xcode can seem rather bizarre and alien. In this article we will take a step-by-step approach to writing externals from scratch using Apple’s latest developer tools. We will not discuss the source code itself very much, as that information is well covered in the Max/MSP Software Development Kits. We will also approach this topic in a tutorial-like style. That means that we will let a few things slip in order to see what some common errors look like and how we can go about solving them.

Update [2006-1-26 8:53:54 by tim]: This article was originally written for Xcode 2.1; it has now been updated for Xcode 2.2. TAP.

Here we go:

- Download the SDK
- Copy the c74support folder into the /Library/Application Support folder.
- Create a new project. This can be done by selecting the File > New Project… menu item. You will be presented with the assistant dialog. Choose Carbon Bundle and click Next.
- Now you need to name your project and create it. For the purpose of this article, we will create a simple external that rounds floating-point numbers to the nearest integer. We will call this project round and save it in our home (~) folder.
- You should now see Xcode’s main interface window.
- The empty project that Xcode created for us has a source file called main.c. We could use this as-is, but to make things easier on ourselves later we will rename the source file to round.c.
- In order to make this project work with Max, we have to add Max’s glue framework. Right-click (control-click) on the Frameworks folder, and choose Add > Existing Frameworks…. The Framework is located at /Library/Frameworks as shown in the screenshot.
- Go to Project > Edit Active Target in the menu. In the Build tab, we need to tell Xcode about our search paths (where it needs to look for files). Scroll around in the list until you find the Header Search Paths. Fill it in with "/Library/Application Support/c74support/max-includes" (it is essential that you DO use the quotes, as Xcode uses whitespace to separate multiple paths)
- We have a couple more things to do while still in the Build tab:
- Now switch to the Properties tab in the Active Target editor. Change the type to iLaX. Then we can close the Active Target editor.
- At this point, we are done making changes to the project settings. To make sure that we haven’t made a mistake, type the following code into our source file – round.c – and compile the code by clicking on the hammer icon or choosing Build > Build from the menu.

int main(void)
{
	return 0;
}

If all went well, it should say “Succeeded” in the lower right-hand corner of the main project window.
- Now let’s put in the actual code for our object. We won’t comment much on the code, but it does use the newer Obex methods for defining the class, attributes, methods, etc. that were introduced with Max 4.5.
#include "ext.h" // Max Header #include "ext_strings.h" // String Functions #include "commonsyms.h" // Common symbols used by the Max 4.5 API #include "ext_obex.h" // Max Object Extensions // Data Structure for this object typedef struct _round{ t_object ob; // Must always be the 1st field; used by Max void *obex; // Pointer to Obex object void *outlet; // Pointer to outlet } t_round; // Prototypes for methods: need a method for each incoming message void *round_new(long value); void round_float(t_round *x, double value); void round_int(t_round *x, long value); void round_assist(t_round *round, void *b, long m, long a, char *s); // Globals t_class *this_class; // Required: Global pointing to this class /**************************************************************************/ // Main() Function - Object Class Definition int main(void) { long attrflags = 0; t_class *c; t_object *attr; common_symbols_init(); // Define our class c = class_new("round",(method)round_new, (method)0L, (short)sizeof(t_round), (method)0L, A_DEFLONG, 0); class_obexoffset_set(c, calcoffset(t_round, obex)); // Make methods accessible for our class: class_addmethod(c, (method)round_int, "int", A_LONG, 0L); class_addmethod(c, (method)round_float, "float", A_FLOAT, 0L); class_addmethod(c, (method)round_assist, "assist", A_CANT, 0L); class_addmethod(c, (method)object_obex_dumpout, "dumpout", A_CANT,0); class_addmethod(c, (method)object_obex_quickref, "quickref", A_CANT, 0); // Finalize our class class_register(CLASS_BOX, c); this_class = c; return 0; } /**************************************************************************/ // Object Life // Create an instance of our object void *round_new(long value) { t_round *x; x = (t_round *)object_alloc(this_class); if(x){ object_obex_store((void *)x, _sym_dumpout, (t_object *)outlet_new(x,NULL)); x->outlet = intout(x); // Create the outlet } return(x); // return pointer to the new instance } 
/**************************************************************************/
// Methods bound to input/inlets

// Method for Assistance Messages
void round_assist(t_round *x, void *b, long msg, long arg, char *dst)
{
	if(msg==1) // Inlets
		strcpy(dst, "(int/float) number to round");
	else if(msg==2) // Outlets
		strcpy(dst, "(int) rounded number");
}

// INT input
void round_int(t_round *x, long value)
{
	outlet_int(x->outlet, value);
}

// FLOAT input
void round_float(t_round *x, double value)
{
	long out;

	if(value > 0)
		out = ((long)(value + 0.5));
	else
		out = ((long)(value - 0.5));

	outlet_int(x->outlet, out);
}

- Now we can try to compile it again, but we get an error. This error is caused because in our source we included Max 4.5’s helper utilities for working with symbols. We included the header file (commonsyms.h) but we also need to add the actual C code to our project. Locate the file commonsyms.c and drag it into the source folder in your project. Xcode presents a sheet with some options. We can just click the Add button. This results in the file being present in the source folder, where Xcode can access it.
- Try compiling again now – click the hammer. If you’ve followed all of the directions, you should see that it is a success!
- Before we pat ourselves on the back too hard though, we’d better test the object in Max… Back in the Finder, we need to locate our built external. It should be in the build directory of our project folder, as shown in the following screenshot. We can either add this folder to our Max search path, or copy the external into Max’s search path. Just make sure that Max can find it.
- Now start up Max and let’s try it out. A word of warning: this has nothing to do with Mach-O or Xcode, but we called our external “round”. As of this writing there are a lot of people with patches called “round” or something similar. Furthermore, while Max does not currently ship with an external called “round”, it is always possible that it could do so in the future.
All developers are advised to name externals with a unique identifier in the name. As an example, my initials are TAP – so all of my externals start with that (such as tap.delay~, which also makes a nice pun). We can tell in this case that the correct object has loaded because it has two outlets. It would be pretty rare to find other rounding objects with 2 outlets. The second outlet is created for dumping out state information using the Obex/Pattr system. More on that in future writings. In the meantime, enjoy getting to know Xcode. Even if you love CodeWarrior, Xcode is going to be in your future, as Intel-based Macintoshes are not likely to run your CW-compiled externals. Now is a good time to get acquainted!

Writing Externals with Xcode 2.2
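The round-half-away-from-zero rule implemented in round_float is easy to check outside of Max. A small Python mirror of the same arithmetic (for illustration only; the external itself is plain C):

```python
def round_half_away_from_zero(value):
    # Mirrors round_float: add or subtract 0.5, then truncate toward zero,
    # which is what the C cast to (long) does
    if value > 0:
        return int(value + 0.5)
    else:
        return int(value - 0.5)

print(round_half_away_from_zero(2.4))   # 2
print(round_half_away_from_zero(2.5))   # 3
print(round_half_away_from_zero(-2.5))  # -3
```

Note that this differs from Python's built-in round(), which rounds halves to the nearest even integer.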
http://cycling74.com/2005/10/05/writing-externals-with-xcode-22/
Web applications are arguably the most common type of application developed these days. The browser is ubiquitous and writing a Web application ensures that your application will be able to run everywhere (although you'll probably need to take some care to ensure it looks good on different form factors). Web applications do a lot and writing them from scratch takes a great deal of effort. A lot of the hard work is generic and is not related to the application itself. In this article, I'll introduce you to Flask — a Python Web framework that takes care of much of the hard work and allows you to focus on the essence of your application.

Flask is a very flexible and modular Web framework. It is built on top of Werkzeug, which takes care of all the low-level WSGI and HTTP details. Flask adds templating, routing, testing capabilities, organizational patterns via blueprints and a sensible extension mechanism with a huge number of useful extensions. Let's see what a simple application looks like. If you want to follow along, install Flask first. The beauty of Flask is that it scales from really simple one-file applications to serious multi-module and multi-package applications.

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hi_there():
    return 'Hi There!'

if __name__ == '__main__':
    app.run()

The Web application is an instance of the Flask class. The route is defined using the @app.route decorator. The decorated hi_there() function is the Web method that is called when you navigate to the root of the application. Flask comes with a built-in test server, so to test your application locally just run this file. Flask will launch its test server on port 5000 by default and you can go in your browser to and see the following message:

This is super quick and easy, but typically you would want to return HTML/CSS/JavaScript and not just plain text. Flask has got you covered with Jinja2 templates. Let's extend our little application to return the current time.
I'll add some structure as follows:

.
├── app.py
└── templates
    └── app.html

Our main app.py file now looks like:

from flask import Flask, render_template
from datetime import datetime

app = Flask(__name__)

@app.route('/')
def hi_there():
    now = datetime.now().replace(microsecond=0)
    return render_template('app.html', now=now)

if __name__ == '__main__':
    app.run(debug=True)
If you are considering Web development using Python on the back end, I recommend you take a serious look at Flask.
http://www.devx.com/webdev/getting-started-with-the-flask-web-framework.html
Agree with you. I got almost the same issue some days ago; in the end I changed the image, and it became OK.

From this error information: "[Errno 13] Permission denied: '/var/lib/neutron/dhcp/31dbd35d-0269-42ae-a08e-3112b0401acd/tmpINZEM5'", it seems that it resulted from file or directory permission issues. Can you check that?

From what I know, 9292 is for glance, and 5000 is for keystone. I stopped my glance registry service and got the same error message as yours. Please make sure your glance service has already started by using "service openstack-glance-registry status" or "service glance-registry status".

There are so many factors that can cause this issue. This is a very useful link about neutron network troubleshooting. Hope it can help you.

No, from what I have debugged, I think it is a bug.

First you must make sure the attached security group has added the ssh port (22). Then you can use "telnet ip_addr 22" to check if you can access the ssh port; most of the time, if you can access this port, it means the ssh service has started. However, the easiest way is to use "nova get-vnc-console instance_id novnc" to connect to the instance; the precondition is that you know the password and can log in.

Hi guys, I found the admin role can boot an instance with a non-owned network, but cannot delete it in my env. However, after changing the non-owned network to shared, the instance can be deleted. My system is CentOS 6.5 and the yum repo site is "". The following is what I did:

1. Use "nova list" or "nova show" to check if the instance was deleted. (In this step, I can see the instance status become active again.)
2. Update the network to be shared.

I don't know whether this results from my configuration; otherwise, it may be a bug. Can someone help me check if it is the same in your env? Thanks.
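The "telnet ip_addr 22" check suggested above can also be scripted. This is a generic TCP reachability probe (my own helper, not an OpenStack tool), useful when checking many instances at once:

```python
import socket


def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds.
    Same reachability check as 'telnet ip_addr 22', but scriptable."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```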
Installed all-in-one mode by devstack with the following localrc:

ADMIN_PASSWORD=passw0rd
MYSQL_PASSWORD=root
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2
CINDER_BRANCH=stable/havana
GLANCE_BRANCH=stable/havana
HORIZON_BRANCH=stable/havana
KEYSTONE_BRANCH=stable/havana
KEYSTONECLIENT_BRANCH=stable/havana
NOVA_BRANCH=stable/havana
NOVACLIENT_BRANCH=stable/havana
NEUTRON_BRANCH=stable/havana
SWIFT_BRANCH=stable/havana
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data
FLOATING_RANGE=192.168.0.224/27
FIXED_RANGE=10.10.1.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth1
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-lbaas
enable_service q-meta
enable_service q-metering
enable_service neutron
# Optional, to enable tempest configuration as part of DevStack
enable_service tempest
enable_service heat h-api h-api-cfn h-api-cw h-eng
enable_service n-cell
SERVICE_TOKEN=passw0rd

After the installation finished, I can use nova boot to boot an instance, but if I use "nova service-list" or "nova host-list" to show the services and hosts, it returns an error msg:

2013-12-30 07:56:14.910 ERROR object [req-005a967f-97be-4401-8afb-c0c562402405 admin admin] Error setting Service.id
2013-12-30 07:56:14.910 TRACE object Traceback (most recent call last):
2013-12-30 07:56:14.910 TRACE object   File "/opt/stack/nova/nova/objects/base.py", line 70, in setter
2013-12-30 07:56:14.910 TRACE object     field.coerce(self, name, value))
2013-12-30 07:56:14.910 TRACE object   File "/opt/stack/nova/nova/objects/fields.py", line 166, in coerce
2013-12-30 07:56:14.910 TRACE object     return self._type.coerce(obj, attr, value)
2013-12-30 07:56:14.910 TRACE object   File "/opt/stack/nova/nova/objects/fields.py", line 231, in coerce
2013-12-30 07:56:14.910 TRACE object     return int(value)
2013-12-30 07:56:14.910 TRACE object ValueError: invalid literal for int() with base 10: 'region!child@1'
2013-12-30 07:56:14.910 TRACE object
2013-12-30 07:56:14.917 ERROR nova.api.openstack [req-005a967f-97be-4401-8afb-c0c562402405 admin admin] Caught error: invalid literal for int() with base 10: 'region!child@1'
2013-12-30 07:56:14.917 TRACE nova.api.openstack Traceback (most recent call last):
2013-12-30 07:56:14.917 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/__init__.py", line 121, in __call__
2013-12-30 07:56:14.917 TRACE nova.api.openstack     return req.get_response(self.application)
2013-12-30 07:56:14.917 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
2013-12-30 07:56:14.917 TRACE nova.api.openstack     application, catch_exc_info=False)
2013-12-30 07:56:14.917 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application
2013-12-30 07:56:14.917 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
2013-12-30 07:56:14.917 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2013-12-30 07:56:14.917 TRACE nova.api.openstack     return resp(environ, start_response)
2013-12-30 07:56:14.917 TRACE nova.api.openstack   File "/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", line 581, in __call__
2013-12-30 07:56:14.917 TRACE nova.api.openstack ...

What do you mean by "does not work"? Can you check whether the vm has got an IP? Or, if it got one, you can check your tap device's tag number. If you use cirros, you can use "nova console-log" to see the boot console log, where you can check whether it got an IP. And use "ovs-vsctl show" to show the tag number, but the premise is that you used openvswitch.

I meet many network troubles in distributed env, and this link is helpful for me.

Yes. Thanks.

What kind of yum repo should I use if I'm not a Red Hat customer? Is it this one " "?
In rhel 6.5, we don't have to update the kernel, because the kernel already supports it. But the preinstalled package "iproutexxxxx" doesn't support netns, so I used rpm -e to remove it and downloaded the right package from; after installing it, it supports namespaces now.

I think this is for rhel 6.4, but thanks, I have worked out how to make it work. I must install the iproute-2.6.32-130.el6ost.netns.2.src.rpm package.

Thanks, got it.

When I use 'ip netns list', it raises an error msg.

Hi, dheeru, thanks for your answer. I know that the different urls have different access authority. But I don't know when or where those different urls will be used. E.g., which kind of urls will be used when nova exchanges messages with keystone or neutron?

Does anyone know how to let rhel 6.5 support network namespaces? Red Hat said the new release rhel 6.5 supports network namespaces in the release notes, but I found the "ip netns" related commands still do not work.

It is not difficult to understand that a floating ip is needed in openstack or in some other cloud platform, just like amazon. Just think: how do customers connect to their vms from an external network without floating ips? How do customers let their webapps be visited without floating ips? It is just like a user in an internal network connecting to the external network by nat: a user on the external network cannot connect to the internal one without port mapping.
https://ask.openstack.org/en/users/1778/huwei-xtu/?sort=recent
Sometimes it's hard to determine whether to use a property or a method in C#. There are, however, guidelines to help you choose the right feature to use.

Month: March 2011

Link Directly to a Sitecore Item in a Custom Editor
Learn how to make a custom Sitecore editor link directly to an item using the built-in JavaScript tools.

Use Namespace Aliases for Ambiguous Sitecore Class Names
Easily get over namespace ambiguity with Sitecore classes by using C#'s namespace aliases.

Right-Click Attach To Process in Visual Studio
Here's a tip on how to tweak your Visual Studio code window context menu to include the Attach to Process debug command.

Using the DataSource Field with Sitecore Sublayouts
Sitecore sublayouts provide modular presentational pieces to Sitecore sites. Learn how to assign specific data to these components.

Handling Multiple Hostnames in a Sitecore Multi-Site Solution
Learn a quick trick to handle hostnames with and without subdomains in a Sitecore multi-site solution.
http://firebreaksice.com/2011/03/
A Graphical Explanation Of Javascript Closures In A jQuery Context

Over the weekend, I was working on my in-depth jQuery presentation for the New York ColdFusion User Group. As part of the presentation, I wanted to discuss the beauty of Javascript "closures" and how jQuery makes tremendous use of them. Javascript closures can be a very hard thing to wrap your head around, especially when you are faced with vague, academic definitions. Like I said, closures can be a hard thing to really understand. As such, I wanted to try and explain them [closures] with the help of some graphics because, as they say, a picture is worth a thousand words.

First, let's look at a simple jQuery demo (the original snippet was an image; this is a reconstruction based on the variable names used below):

var jLinks = $( "a" );

jLinks.each(
	function( intLinkIndex ){
		var jThis = $( this );

		jThis.click(
			function( objEvent ){
				alert( intLinkIndex );
			}
		);
	}
);

In this code, we are adding click event handlers to all of the links in the document. These click event handlers, when triggered, will alert the index of the given link in the context of the entire set of document links. Of course, the intent of the code is secondary to the demonstration. What's of primary importance here is the structure of the code. Take a look at the functions that are being defined: notice that we are defining three anonymous methods in this demo. Each of these methods is defined within a parent context. The outer-most method is defined in the context of the window (theoretically, based on the visible code); the middle method is defined in the context of the outer-most method; and the inner-most method is defined in the context of the middle method.

In Javascript, a given context always has access to its parent context. To be honest, the mechanisms behind this visibility are a bit beyond my purview, but I know it has something to do with scope chains (or is it prototype chains - definitely some sort of chain).

As such, each method in our demonstration has access not only to its locally defined variables, but also the variables available in its parent context. Notice that in the inner-most method, it has to move up one scope in the parent chain to find "intLinkIndex," but it has to move up two scopes in the chain to find the "jLinks" variable. While it might not be obvious, the two-parent-context-jump is following the same exact rules - first, the inner-most method looks for jLinks in its own context. Since it's not a local variable, it can't find it. Then, it asks its parent context (the middle method) for the variable. The middle method checks its local scope and can't find it, so it passes this request up to its parent context, the outer-most method. And, because each context has access to its parent context, this variable request is easily passed up the context chain.

Ok, so far so good; now, let's really get into closures. Keeping this whole parent context concept in mind, take a look at where these defined methods are ending up. Where do they go? How long will they live? These are questions we can't easily answer; but, more importantly, these are also questions that the browser's memory management system can't answer (speaking with personification). And, because it doesn't know where these methods are ending up, it can't garbage collect (destroy) the original parent context of the method definitions. That's the power and beauty of the Javascript closure right there - see, even though the methods are being passed away from their original context, due to the scope chains in place, they still have access to their parent context and to all the variables available in their parent context.

So, the real message to take away here is that when you pass a method reference out of the context in which it was defined, the method still has access to its parent context. And, once you understand and embrace this, it can be leveraged in some really cool ways (as seen in this jQuery example).
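For readers more at home on the server side: Python closures follow the same scope-chain rules described above. This small, self-contained sketch shows a parent context being kept alive after the enclosing function has returned:

```python
def make_counter():
    # 'count' lives in make_counter's context; the returned function keeps
    # that context alive after make_counter itself has returned -- the
    # same "parent context can't be garbage collected" effect.
    count = 0

    def increment():
        nonlocal count
        count += 1
        return count

    return increment
```

Each call to make_counter() creates a fresh parent context, so two counters advance independently.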
This is not the most in-depth explanation of closures, but hopefully these graphics have helped you to better understand the closure.

Good explanation -- one of the best explanations I've seen. There is also a great video on Advanced JS by Douglas Crockford that goes into details on how this works behind the scenes. Check it out - it's available in the YUI theater.

... that your post is an excellent explanation of a very confusing concept. Very nicely done.

Thanks guys. I'll have to check out the YUI videos. I think I went there once and was just overwhelmed with the amount of content :) Time to get over that fear!

Looking forward to the presentation next week!

@Aaron, Oh heck yeah! You're a star!!!

That's a really good explanation!!!!!!!! Thanks for that Ben

What makes most of this possible is that JavaScript is "lexically scoped." This means that something is in the scope in which it was defined--not where it ends up. Wherever I wrote the function, it is in (and stays in) that scope. Lexical scoping and lambda (anonymous) functions combined with JavaScript's awesome chaining capabilities also make for some brilliant ways to namespace your projects and emulate class-like method/property access-control (i.e. public and private). It is what makes jQuery possible! Once you wrap your head around these concepts, JavaScript becomes a sexy and extremely flexible and powerful language. Yes, I called a programming language sexy!

...continued from my previous comment... I forgot to add that the chaining capabilities allow for "self executing" lambdas, which is one of the coolest namespacing tools available.

Great explanation, love the images. Nice to see such a pragmatic explanation, without too much detail to confuse the topic.

@Jim, I have seen one or two examples of public/private emulation with Javascript. At the time, I didn't understand enough Javascript to see how that was working. Maybe it would be different now. Yeah, it is sexy stuff!

A very thorough explanation of Javascript closures can be found here (take a deep breath): >.

too bad, because it's exactly what a closure is! Free variables are identifiers in the function body that are not parameters. An environment is a list of names and their associated values, used to evaluate expressions. So there you have it: a closure is an expression that can look up an environment for the values of its free variables or, to use your own terms, an expression that has "access to their parent context"

>... they can be answered though: in the case of jThis, the anonymous function is registered as a handler for the onclick event (think about it as being referenced by the <a> element itself); in the case of jLinks, the anonymous function is executed for each (index, obj) in jLinks, ... maybe that's what you call transient? In any case once that block has executed, that function is gone...

@Zorg, That is a great link and is, in fact, where I got the original definition for closures. As far as where the functions go, I am sorry if I did not communicate well; I understand that yes, we can look into the Click event wiring and actually see where it goes; what I mean to get across in a more abstract way is that when a method is passed out of its parent context, we should not assume anything about where it goes or how long it lives. And, in doing so, we can start to become comfortable with the idea that the variables in its parent scope must be kept around in case the method reference is ever executed. I wanted to get people comfortable with the idea of closures at a high level, not so much with the specifics of the behind the scenes.

@Ben
> I wanted to get people comfortable with the idea of closures at a high level, not so much with the specifics of the behind the scenes.
Sorry, I missed that :)

@Zorg, Nothing to be sorry about at all my friend. Closures are a really complex concept for people to get. The more insight that people such as yourself can offer, the better off we all are going to be.

Too complicated. Where did you learn this? Do you know of any good resource or book for learning functional JavaScript from the ground up?

@dl, JavaScript: The Good Parts - Unearthing the Excellence in JavaScript, by Douglas Crockford. This is good to learn and many thanks to Douglas.

@FM, I've heard that that is a very good book.

I've translated your article into Russian. Thought you don't mind. All the backlinks are provided. Here is the full version: Thank you for the great article!

@Andrew, That's awesome!! Glad you felt it was worthy :)

+1 Douglas Crockford: this is the best book about the functional core of JS. (Don't miss his videos - 7 on Javascript + 3 on DOM.) For the OO model of Javascript see the Lieberman paper on prototype-based inheritance and see the papers on Self by Ungar. All this is freely available online.

@Zorg, Do you have a link to the Ungar paper?

"Self: The Power of Simplicity" is a good place to start; for the Lieberman paper

This was great to understand. Thank you

I think jQuery converts all the $("a") into objects, and then jLinks.each assigns the jThis.click method to each of the "a" objects.

Thank you, that's the best explanation I have seen.

Thanks for this, nice bit of graphical explanation but now I need to know more... :)

@Joel, @Julian, Glad you guys are liking it.

Images aren't appearing!

@Richard, My server has been having some issues lately. Try hitting this blog post again and it should work. Sorry about the dips in performance.

I especially liked your site posts, and so on through all of them beautiful

I loved this explanation of closure with scope-chaining. Clearest ever article on closure!!! Thanks Ben

vimal Germany!

Ben, Loved this.... I recommended it to several folks who (like me) tend to see closures as just out of reach mentally :) -Mark

Great post Ben! This is helping me to understand JavaScript more. Much appreciated.

Wonderful effort Ben... really opens a new dimension of js variables. Thanks a lot

Thanks for this article, but I fear you missed an important point. If variables in the outer context change, these changes affect the inner anonymous functions as well. That means: if you change the outer variable at some later time, the inner function sees the changed value! Not the value the variable had when the inner function was created. For me, that's not a closure. JavaScript does not have real closures at all in my opinion due to that!

Your code just appears to do what you think it should do, because your jLinks variable isn't changed during looping over all the 'a' elements. If it would change, all inner functions would see the last value this variable had. That's mostly not what the developer expects.

I am used to Perl, and closures in Perl do not have this flaw. All lexical variables from the outer scope are bound (with their actual value) into the inner function. The inner function may change the variable. All other instances of the same inner function do not see this change! In JavaScript this happens.

To get a "real" closure in JavaScript you have to wrap the critical code, which likes to get the outer variables enclosed with their state at function creation time, in another function, which gets these parameters passed, and call it immediately, passing these parameters. That's what Prototype's bind() method is doing, effectively.

Please refer to as an example to show the difference. You see all links in the testbed1 div share the same i variable. Even copying this variable to another lexically scoped variable inside the loop doesn't help. The links in testbed2 do not suffer from this, since the click() callback is put inside a wrapper function. Each link has its own "copy" of the variable. The callback changes this variable and you see that the state is maintained for each link separately.

I hope this helps clarify this topic. I know your posting is 3 years old, but since it shows up inside the top 10 of a Google search for "jQuery Closure" I think it's worth adding some stuff to it ;)

I am new to Javascript and have read lots of the scary explanations of a Closure. This is the best and the most easy-to-follow explanation that I have read. Many thanks for this.

Huh, this explanation made me wonder why closures are so confusing in the first place... :P It made it seem pretty elementary to me. One thing though: so when people refer to "closures" as a noun -- e.g. "create a closure" -- they basically mean create an anonymous function within a method like .each() or .click() which takes itself to a new mystical context?
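The caveat raised in the comments above, that an inner function sees the outer variable's latest value rather than the value at creation time, is easy to demonstrate outside the browser too. Python closures are late-binding in exactly the same way, and binding via a default argument plays the role of the wrapper-function fix described for testbed2:

```python
# Late binding: each lambda looks up 'i' when *called*, so all three see
# the final value of the loop variable.
late = [lambda: i for i in range(3)]

# Binding at creation time: the default argument captures the current
# value, playing the role of the immediately-called wrapper function.
bound = [lambda i=i: i for i in range(3)]
```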
http://www.bennadel.com/blog/1482-a-graphical-explanation-of-javascript-closures-in-a-jquery-context.htm
Hi, I am new to streamlit. While trying to upload a text file as shown in the code below using @st.cache, I am getting an error telling me to use a hash function on it. Could anyone please help me to resolve this issue…

df_uploaded = st.sidebar.file_uploader('Choose txt file:', type=['txt', 'csv'])

@st.cache(show_spinner=True)
def load_data(file_uploaded):
    return pd.read_table(file_uploaded, header=None, encoding='utf-8')

if df_uploaded:
    temp = load_data(df_uploaded)

**UnhashableType**: Cannot hash object of type `_io.StringIO`

While caching some code, Streamlit encountered an object of type `_io.StringIO`. You'll need to help Streamlit understand how to hash that type with the `hash_funcs` argument. For example:

```
@st.cache(hash_funcs={_io.StringIO: my_hash_func})
def my_func(...):
    ...
```
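One common fix is to hash the uploaded buffer by its contents. The hash function itself is plain Python and can be checked in isolation; the decorator wiring shown in the comment follows the pattern from the error message and is an assumption, not tested against a specific Streamlit version:

```python
import io


def hash_stringio(buf):
    """Key an in-memory text buffer by its contents, so the cache
    re-runs only when the uploaded file actually changes."""
    return buf.getvalue()


# Assumed wiring in the app, per the error message's example:
# @st.cache(hash_funcs={io.StringIO: hash_stringio}, show_spinner=True)
# def load_data(file_uploaded):
#     return pd.read_table(file_uploaded, header=None, encoding='utf-8')
```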
https://discuss.streamlit.io/t/hash-function-error-for-uploaded-text-file/2058
[Adding libtool to the CC: list, since Bob indicates there are libtool and autoconf implications as well. The thread starts at <>.]

On 12/26/2010 09:51 AM, Bruno Haible wrote:
> So, when libposix becomes reality, it may be compiled with "gcc", thus
> with a setting of
>   #define LINK_FOLLOWS_SYMLINKS 0
> But when it gets linked to a program that was compiled with "c99" or
> "cc -xc99=all", then the link() function _will_ follow symlinks,
> thus the link_immediate function will not perform as expected.

Given the other problems that ensue on Solaris when one compiles and links to different standards, the simplest answer may be just "don't do that". It's not just the __xpg4 and __xpg6 stuff; it's also the _lib_version stuff: scanf behaves differently depending on which flavor of the -X option one passes to cc. It's quite a mess.

If (despite the above) we do want to support compiling an application with cc -xwhatever or cc -Xwhatever, while linking to a library built in the default mode, the proposed change would appear to place a significant performance penalty for the (presumably more common) case of compiling and linking in the default mode. I would suggest something like the following patch instead, with a similar patch for link_follow, and with the appropriate m4 magic to make LINK_FOLLOWS_SYMLINKS a runtime test (__xpg4) on hosts like Solaris that have the __xpg4 variable. (Overall, though, it may be better not to poke a stick at this particular beehive. :-)

diff --git a/lib/linkat.c b/lib/linkat.c
index 73b1e3e..9b3550a 100644
--- a/lib/linkat.c
+++ b/lib/linkat.c
@@ -48,13 +48,17 @@
 /* Create a link.  If FILE1 is a symlink, either create a hardlink
    to that symlink, or fake it by creating an identical symlink.  */
-# if LINK_FOLLOWS_SYMLINKS == 0
-# define link_immediate link
-# else
+
 static int
 link_immediate (char const *file1, char const *file2)
 {
-  char *target = areadlink (file1);
+  char *target = NULL;
+  int target_errno = 0;
+  if (LINK_FOLLOWS_SYMLINKS)
+    {
+      target = areadlink (file1);
+      target_errno = errno;
+    }
   if (target)
     {
       /* A symlink cannot be modified in-place.  Therefore, creating
@@ -89,11 +93,10 @@ link_immediate (char const *file1, char const *file2)
       free (target);
       free (dir);
     }
-  if (errno == ENOMEM)
+  if (target_errno == ENOMEM)
     return -1;
   return link (file1, file2);
 }
-# endif /* LINK_FOLLOWS_SYMLINKS == 0 */

 /* Create a link.  If FILE1 is a symlink, create a hardlink to
    the canonicalized file.  */
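As an aside, the two link() behaviours the patch has to reconcile can be observed from a script: Python's os.link exposes the same choice through its follow_symlinks flag (honoured on Linux via linkat). This demo only illustrates the semantics; it is not part of the proposed gnulib change:

```python
import os
import tempfile


def hardlink_to_symlink():
    """Create file1 <- sym (a symlink), then hard-link 'sym' itself
    without following it, i.e. the linkat-without-AT_SYMLINK_FOLLOW
    behaviour.  Returns True if the new name is itself a symlink."""
    with tempfile.TemporaryDirectory() as d:
        file1 = os.path.join(d, "file1")
        sym = os.path.join(d, "sym")
        hard = os.path.join(d, "hard")
        with open(file1, "w") as f:
            f.write("data")
        os.symlink(file1, sym)
        os.link(sym, hard, follow_symlinks=False)
        return os.path.islink(hard)
```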
http://lists.gnu.org/archive/html/autoconf/2010-12/msg00061.html
Oh and could you also update the build.xml to use Java 5? It won't build with 1.4 since enums weren't a part of the Java 4 spec.

On 14 November 2010 10:32, mehdi houshmand <med1...@gmail.com> wrote:
> I attempted to build the code on my home system (Windows 7, latest
> java release 1.6.0_22) and I got the same thing. After some
> investigation it turns out this is a known bug with maven (google
> "maven enum bug"). It might be worth putting the semi-colon in there
> to prevent anyone else facing this issue since it makes little
> difference either way.
>
> Thanks
>
> Mehdi
>
> On 14 November 2010 09:17, mehdi houshmand <med1...@gmail.com> wrote:
>> I was building from the command line, using "ant package" and
>> referencing I've got the same version as you... Curious... Though I
>> completely agree with your comments on TTFSubSetFile, it needs a
>> little sprucing.
>>
>>> Hi Mehdi
>>>
>>> On 12.11.2010 16:32:45 mehdi houshmand wrote:
>>>> Hi Jeremias,
>>>>
>>>> This code fails the build, you need to add a ";" (a semi-colon) to the
>>>> last parameter in the enumerated type in
>>>> o.a.f.fonts.truetype.TTFSubSetFile.
>>>
>>> I don't see that. Eclipse/ECJ is happy with it and the Sun JDK 1.5.0_22
>>> also doesn't have a problem when running the Ant build. Checking the JLS
>>> 3.0, the semicolon is optional if there's no body content after the
>>> entries. An example from the JLS:
>>>
>>> public class Example1 {
>>>
>>>     public enum Season { WINTER, SPRING, SUMMER, FALL }
>>>
>>>     public static void main(String[] args) {
>>>         for (Season s : Season.values()) System.out.println(s);
>>>     }
>>> }
>>>
>>> What environment are you working with?
>>>
>>>> I was also curious why you made
>>>> TTFSubSetFile.GlyphHandler? Why do you make it an interface, and why
>>>> do you use an anonymous class in PSFontUtils, only to pass it back to
>>>> the same class? If there's only one implementation and if it only
>>>> contains a single method, I wouldn't have thought an interface was
>>>> necessary.
>>>
>>> It's a normal callback interface from PSFontUtils back into
>>> TTFSubSetFile, called for each glyph when building the subset.
>>>
>>>> TTFSubSetFile already contains various methods that perform
>>>> similar functions (i.e. take an input, convert it to the necessary
>>>> format and write to file), why couldn't this be implemented in the
>>>> handleGlyphSubset(...) method?
>>>
>>> My main problem with the way TTFSubSetFile is currently written is that
>>> writing the records is mixed with building the table index. If that were
>>> not so, it would have been easier to go with an approach that you would
>>> have expected. But my approach actually has the advantage that there's
>>> less memory build-up, since not the whole subset including glyphs has to
>>> be buffered in memory. After all, TTF loading is known to take a LOT of
>>> memory.
>>>
>>>> Is there another implementation you're
>>>> making this flexible for?
>>>
>>> No. The context: my client (your employer) asked for urgent help to
>>> resolve the problem with my first attempt at TTF subsets when printed on
>>> HP printers. I needed a quick resolution after I found out what could be
>>> wrong. I didn't know if I would turn out to be right until after I
>>> committed the changes and Chris/Vincent could run tests. So I didn't
>>> care about too much code beauty. There's actually quite a bit of
>>> copy/paste/change in TTFSubSetFile as a result which I'm not
>>> particularly proud of. I'm still waiting for feedback if my change
>>> really fixed the problem although preliminary results show that the
>>> problem is now solved. I expect that some refactoring would do
>>> TTFSubSetFile some good.
>>>
>>>> Also, from a design point, why have you made each glyph a single
>>>> string?
>>>
>>> That was no design decision. It's a requirement found in the PS langref
>>> third edition, page 356, describing the contents of /GlyphDirectory.
>>> Each glyph is looked up by its index when an array is used.
>>>
>>>> Surely if the string must finish at a glyph boundary, then we
>>>> could pack in several glyphs into the string and make it intelligent
>>>> enough not to write partial glyphs?
>>>
>>> That would be useful if we were to keep putting the glyphs in the /sfnts
>>> entry, but not with /GlyphDirectory.
>>>
>>>> Will this method have any performance benefits/disadvantages?
>>>
>>> The GlyphDirectory allows to keep memory consumption down in the JavaVM.
>>> Otherwise, I see no implications.
>>>
>>>> The spec says 65535 is the array limit, will this be hit?
>>>
>>> I think that's unlikely. We will hardly have any font with more than
>>> 65535 glyphs and no single glyph is likely to be larger than 64KB to
>>> describe its outline. We might still run into problems with the /sfnts
>>> entry, though. If we can improve TTFSubSetFile it should be much easier
>>> to stop strings at record boundaries.
>>>
>>> <snip/>
>>>
>>> Jeremias Maerki
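The callback shape being discussed, a handler invoked once per glyph so records can be streamed out instead of buffered in memory, can be sketched in a few lines. This is an illustrative Python rendering of the idea only; the names GlyphHandler and build_subset are mine and do not correspond to FOP's actual Java API:

```python
from typing import Callable, Iterable

# One call per glyph: (index, glyph bytes) -> None.  The caller decides
# what to do with each record (e.g. write it out immediately).
GlyphHandler = Callable[[int, bytes], None]


def build_subset(glyphs: Iterable[bytes], handle_glyph: GlyphHandler) -> int:
    """Walk the glyphs, handing each to the caller's handler; return the
    number of glyphs processed.  Nothing is buffered here."""
    count = 0
    for index, data in enumerate(glyphs):
        handle_glyph(index, data)
        count += 1
    return count
```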
https://www.mail-archive.com/fop-dev@xmlgraphics.apache.org/msg12865.html
Building your first REST API

How that scales up well to more complex applications. (Go check out the official Flask documentation for some more details. There are also some minimum viable product examples to get you started.)

Flask is also very beginner friendly — coming from a data science background, I was able to create my first basic application after just a few hours of fiddling. Now that that's out of the way, let's jump into actually creating our API.

The code

For this example, we will be building a simple API that will return a basic JSON payload with some mock data. We will also be securing our API using JWT authentication to ensure that no one can access the API without proper authentication.

The basic steps that we will be following when creating and securing our application:

- Step 1: Initialise the application
- Step 2: Create the routes (endpoints)
- Step 3: Secure the application using JWT

If you don't have Flask installed, go ahead and do so. We will also be using the Flask JWT Extended library. This library will be doing most of the heavy lifting when securing the API using JWT — which means less coding for us.

pip install Flask
pip install flask-jwt-extended

We can now import the libraries that we need — we will be using jsonify to return a JSON payload, make_response to return a response with a valid HTML response status code and request to grab the authentication details that we pass as headers when we login.
# Import the relevant libraries
from flask import Flask, jsonify, make_response, request
from flask_jwt_extended import jwt_required, create_access_token, JWTManager
import json

Once both libraries are installed and imported, we need to generate some mock data that we will be returning once the API is called, as well as some dummy authentication details. Normally, your authentication details will be stored in a database, but we will keep it simple for now by just hardcoding them.

# Generate the test payload
test_data = {
    'test': ['this', 'is', 'a', 'test']
}

# Create the login details
username = 'admin'
password = 'password'

Step 1 — The next step is to create and initialise our application. We can then create our secret key that will be used for authentication. Normally, we would encrypt these using an algorithm (be sure to change your secret key to something more secure before deploying). We also set the validity period for the token as 3600 (1 hour). It's up to you how long you want to set this validity period for, but it has to be long enough for a user to be able to make some subsequent calls before it expires.

# Initialise the application
app = Flask(__name__)

# Update the secret key
app.config['SECRET_KEY'] = 'my_precious'
app.config['JWT_ACCESS_TOKEN_EXPIRES'] = int(3600)

# Setup the Flask-JWT-Extended extension
app.config["JWT_SECRET_KEY"] = "super-secret"
jwt = JWTManager(app)

Now that we have initialised the application and modified the configuration, we can add routes. Routes are the endpoints that we will be accessing. For our example, we will be creating two routes:

- login (POST) — the endpoint that we will be accessing to generate the token
- get_data (GET) — the endpoint that we will be accessing to return the mock data

Step 2 — We can add our routes by using the "@app.route()" decorator function.
We need to check that the credentials passed in the header match the credentials that we created — if no credentials are specified, we reject the login and return a message asking the user to log in. If the credentials match, we pass back a valid token, indicating that the login was successful.

# Create the login route - this is a POST
@app.route('/login', methods=['POST'])
def login():
    auth = request.authorization

    if not auth or not auth.username or not auth.password:
        return make_response('Could not verify your details', 401,
                             {'WWW-Authenticate': 'Login required'})

    user = username
    if not user:
        return make_response('Could not verify your details', 401,
                             {'WWW-Authenticate': 'Login required'})

    # Return a token if the login is successful
    if password == auth.password:
        token = create_access_token(identity=auth.username)
        return jsonify(token=token)

    # Otherwise reject the login
    return make_response('Could not verify your details', 401,
                         {'WWW-Authenticate': 'Login required'})

Once we have logged in and received a token, we can call the "get_data" route by passing the token as a header. Let's create the route.

@app.route('/get_data', methods=['GET'])
@jwt_required
def get():
    return jsonify(test_data)

Step 3 — You'll notice we added the "@jwt_required" decorator this time — this decorator specifies that the route is only accessible with a valid token. (Note that in flask-jwt-extended 4.x and later the decorator is called with parentheses, as "@jwt_required()".) The flask-jwt-extended library handles the complexity for us and all we have to do is specify which routes we want to secure.

The last step is to run the app. We run it in debug mode for now, since this is an example.

# Run the application
if __name__ == '__main__':
    app.run(debug=True)

We now have a working application! Let's test our API.

Testing the application using Postman

Now that our API is ready, we can run it by typing python main.py into the terminal. Once the app is running successfully, you should see a message indicating that it is running on your local host. We can now test our API using Postman.
After configuring Postman with the local host URL, we first need to log in to get a token using the "login" route that we created earlier. We can either log in using basic authentication, or by passing a JSON object with the username and password. We will be passing the credentials as a JSON object in this example. (We won't be covering Postman in too much detail, so check out this article if you need some help getting started.)

# The authentication details
{
    "user": "admin",
    "password": "password"
}

This returns a JSON object with a token that is valid for the next hour.

{"token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpYXQiOjE2MTM2ODIyMjgsIm5iZiI6MTYxMzY4MjIyOCwianRpIjoiN2E3YzU1NjktYjU1Ni00Zjk5LThhYWQtYjZiNzUxMDk2NWQxIiwiZXhwIjoxNjEzNjg1ODI4LCJpZGVudGl0eSI6ImFkbWluIiwiZnJlc2giOmZhbHNlLCJ0eXBlIjoiYWNjZXNzIn0.B2ukLRkZBfZHvcvoNw3MB6r1a5KlPww8Bnz9E6O2Ovo"}

Now that we have logged in and have a valid token, we can access the "get_data" route by passing this token as a header. Since our token is valid, we receive the test payload as a response.

# This is the message that we get
{
    "test": ["this", "is", "a", "test"]
}

We have just successfully created our first REST API for data extraction!

Conclusion

That's it for our very brief tutorial on creating our first REST API using Python and the Flask framework. Flask is a beginner friendly micro-framework that will help you build web applications that scale well. Using the steps laid out here, you will be well on your way to creating your own REST API.
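As a side note, a JWT is just three base64url-encoded segments separated by dots, so you can inspect the claims inside a token without the server at all. This short standard-library snippet decodes the payload segment of the exact token shown above (note that this only reads the claims; it does not verify the signature):

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the middle (payload) segment of a JWT without verifying it."""
    payload_b64 = token.split('.')[1]
    # base64url data must be padded out to a multiple of 4 characters
    payload_b64 += '=' * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

token = ("eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9."
         "eyJpYXQiOjE2MTM2ODIyMjgsIm5iZiI6MTYxMzY4MjIyOCwianRpIjoiN2E3YzU1NjktYjU1Ni00Zjk5LThhYWQtYjZiNzUxMDk2NWQxIiwiZXhwIjoxNjEzNjg1ODI4LCJpZGVudGl0eSI6ImFkbWluIiwiZnJlc2giOmZhbHNlLCJ0eXBlIjoiYWNjZXNzIn0."
         "B2ukLRkZBfZHvcvoNw3MB6r1a5KlPww8Bnz9E6O2Ovo")

claims = decode_jwt_payload(token)
print(claims['identity'])             # the username we logged in with
print(claims['exp'] - claims['iat'])  # token lifetime in seconds
```

Decoding this particular token shows the identity claim "admin" and a lifetime of 3600 seconds, matching the JWT_ACCESS_TOKEN_EXPIRES setting from Step 1.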
https://renier-meyer.medium.com/building-your-first-rest-api-7ab2c3c6e?source=post_internal_links---------3----------------------------
Provided by: libcoin60-doc_3.1.3-2_all

NAME
       SoSearchAction - an action for searching through scene graphs

SYNOPSIS
       #include <Inventor/actions/SoSearchAction.h>

       Inherits SoAction.

   Public Types
       enum LookFor { NODE = 1, TYPE = 2, NAME = 4 }
       enum Interest { FIRST, LAST, ALL }

   Public Member Functions
       SoSearchAction (void)
       virtual ~SoSearchAction (void)
       void setNode (SoNode *const node)
       SoNode * getNode (void) const
       void setType (const SoType type, const SbBool chkderived=TRUE)
       SoType getType (SbBool &chkderived) const
       void setName (const SbName name)
       SbName getName (void) const
       void setFind (const int what)
       int getFind (void) const
       void setInterest (const Interest interest)
       Interest getInterest (void) const
       void setSearchingAll (const SbBool searchall)
       SbBool isSearchingAll (void) const
       SoPath * getPath (void) const
       SoPathList & getPaths (void)
       void reset (void)
       void setFound (void)
       SbBool isFound (void) const
       void addPath (SoPath *const path)

   Static Public Member Functions
       static void initClass (void)

   Static Public Attributes
       static SbBool duringSearchAll = FALSE

   Protected Member Functions
       virtual void beginTraversal (SoNode *node)

Detailed Description
       An SoSearchAction is applied to a scene graph to find nodes matching a given
       node pointer, node type and/or name (see setNode(), setType() and setName()
       below). When using more than one of the setNode(), setType() and setName()
       calls, note that the action will search for node(s) with an 'AND' combination
       of the given search criteria.

       One of the most common pitfalls when using the SoSearchAction class is to call
       the function isFound() after doing a search. It does not return what you would
       expect it to return if you haven't read the documentation for that function.

       Be aware that if you do search operations on an SoSearchAction created on the
       stack, you can get some unfortunate side effects if you're not careful. Since
       SoSearchAction keeps a list of the path(s) found in the latest search, the
       nodes in these paths will be unref'ed when the SoSearchAction stack instance
       is destructed at the end of your function.
       If the root of your scene-graph then has ref-count zero (it is often useful to
       do a unrefNoDelete() before returning a node from a function to leave the
       referencing to the caller), the root node will be destructed! It might be
       better to create a heap instance of the search action in those cases, since
       you'll then be able to destruct the search action before calling
       unrefNoDelete(). Another solution would be to call reset() before calling
       unrefNoDelete() on your object, since reset() truncates the path list.

       See the documentation of SoTexture2 for a full usage example of SoSearchAction.

Member Enumeration Documentation
   enum SoSearchAction::LookFor
       Specify the search criterion. This can be a bitwise combination of the
       available values.

   enum SoSearchAction::Interest
       Values used when specifying what node(s) we are interested in: the first one
       found, the last one or all of them.

Constructor & Destructor Documentation
   SoSearchAction::SoSearchAction (void)
       Initializes internal settings with default values. With the default settings,
       the SoSearchAction will ignore all nodes.

   SoSearchAction::~SoSearchAction (void) [virtual]
       Destructor.

Member Function Documentation
   void SoSearchAction::initClass (void) [static]
       Initializes the run-time type system for this class, and sets up the enabled
       elements and action method list. Reimplemented from SoAction.

   void SoSearchAction::setNode (SoNode *const nodeptr)
       Sets the node pointer to search for. The action will be configured to set the
       search 'interest' to LookFor NODE, so there is no need to call
       SoSearchAction::setFind().

   SoNode * SoSearchAction::getNode (void) const
       Returns the node the SoSearchAction instance is configured to search for. Note
       that this method does not return what was found when you applied the action -
       it only returns what was specifically set by the user with setNode(). What the
       action found is returned by getPath() and getPaths().

   void SoSearchAction::setType (const SoType typearg, const SbBool chkderivedarg = TRUE)
       Configures the SoSearchAction instance to search for nodes of the given type,
       and nodes of classes derived from the given type if chkderived is TRUE. The
       action will be configured to set the search 'interest' to LookFor TYPE, so
       there is no need to call SoSearchAction::setFind().

   SoType SoSearchAction::getType (SbBool &chkderivedref) const
       Returns the node type which is searched for, and whether derived classes of
       that type also return a match.

   void SoSearchAction::setName (const SbName namearg)
       Configures the SoSearchAction instance to search for nodes with the given
       name. The action will be configured to set the search 'interest' to LookFor
       NAME, so there is no need to call SoSearchAction::setFind().

       See also:
           SoNode::getByName()

   SbName SoSearchAction::getName (void) const
       Returns the name the SoSearchAction instance is configured to search for.

   void SoSearchAction::setFind (const int what)
       Configures what to search for in the scene graph. what is a bitmask of LookFor
       flags. Default find configuration is to ignore all nodes, but the setFind()
       configuration is updated automatically when any one of
       SoSearchAction::setNode(), SoSearchAction::setType() or
       SoSearchAction::setName() is called.

   int SoSearchAction::getFind (void) const
       Returns the search configuration of the action instance.

   void SoSearchAction::setInterest (const Interest interestarg)
       Configures whether only the first, the last, or all the searching matches are
       of interest. Default configuration is FIRST.

   SoSearchAction::Interest SoSearchAction::getInterest (void) const
       Returns whether only the first, the last, or all the searching matches will be
       saved.

   void SoSearchAction::setSearchingAll (const SbBool searchallarg)
       Specifies whether normal graph traversal should be done (searchall is FALSE,
       which is the default setting), or if every single node should be searched
       (searchall is TRUE).

       If the searchall flag is TRUE, even nodes considered 'hidden' by other actions
       are searched (like for instance the disabled children of SoSwitch nodes).
       SoBaseKit::setSearchingChildren() must be used to search for nodes under node
       kits.

   SbBool SoSearchAction::isSearchingAll (void) const
       Returns the traversal method configuration of the action.

   SoPath * SoSearchAction::getPath (void) const
       Returns the path to the node of interest that matched the search criterions.
       If no match was found, NULL is returned.

       Note that if ALL matches are of interest, the result of a search action must
       be fetched through SoSearchAction::getPaths().

       There is one frequently asked question about the paths that are returned from
       either this method or the getPaths() method below: 'why am I not getting the
       complete path as expected?' Well, then you probably have to cast the path to a
       SoFullPath, since certain nodes (nodekits, many VRML97 nodes) have hidden
       children. SoPath::getTail() will return the first node that has hidden
       children, or the tail if none of the nodes have hidden children.
       SoFullPath::getTail() will always return the actual tail. Just do it like
       this:

           SoFullPath * path = (SoFullPath *) searchaction->getPath();
           SoVRMLCoordinate * vrmlcord = (SoVRMLCoordinate *) path->getTail();

   SoPathList & SoSearchAction::getPaths (void)
       Returns a pathlist of all nodes that matched the search criterions. Note that
       if interest were only FIRST or LAST, SoSearchAction::getPath() should be used
       instead of this method.

       See also:
           getPath()

   void SoSearchAction::reset (void)
       Resets all the SoSearchAction internals back to their default values.

   void SoSearchAction::setFound (void)
       This API member is considered internal to the library, as it is not likely to
       be of interest to the application programmer.

       Marks the SoSearchAction instance as terminated.

   SbBool SoSearchAction::isFound (void) const
       This API member is considered internal to the library, as it is not likely to
       be of interest to the application programmer.

       Returns whether the search action was terminated. Note that this value does
       not reflect whether the node(s) that was searched for was found or not. Use
       the result of getPath() / getPaths() if that is what you really are looking
       for.

   void SoSearchAction::addPath (SoPath *const pathptr)
       This API member is considered internal to the library, as it is not likely to
       be of interest to the application programmer.

       Sets the path, or adds the path to the path list, depending on the interest
       configuration. The path is not copied, so it can not be modified after being
       added without side effects.

   virtual void SoSearchAction::beginTraversal (SoNode *node) [protected], [virtual]

Member Data Documentation
   SbBool SoSearchAction::duringSearchAll = FALSE [static]
       Obsoleted global flag, only present for compatibility reasons with old SGI /
       TGS Inventor application code.

       It's set to TRUE when an SoSearchAction traversal with
       SoSearchAction::isSearchingAll() equal to TRUE is started, and is reset to
       FALSE again after traversal has finished. (The flag is used by SGI / TGS
       Inventor in SoSwitch::affectsState() to know when SoSwitch::whichChild should
       behave as SoSwitch::SO_SWITCH_ALL. We have a better solution for this problem
       in Coin.)

Author
       Generated automatically by Doxygen for Coin from the source code.
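To tie the member documentation together, here is an illustrative sketch (pseudocode-style, not compiled here; the scene setup and node names are hypothetical) of a typical type-based search built only from the calls documented in this page:

```
// Assumes SoDB::init() has been called and 'root' points to a scene graph.
SoSearchAction search;
search.setType(SoCone::getClassTypeId(), TRUE);  // SoCone and derived types
search.setInterest(SoSearchAction::ALL);         // collect every match
search.apply(root);

SoPathList & paths = search.getPaths();
for (int i = 0; i < paths.getLength(); i++) {
    // Cast to SoFullPath in case nodes on the path have hidden children.
    SoFullPath * path = (SoFullPath *)paths[i];
    SoNode * tail = path->getTail();
    // ... use 'tail' here ...
}
```

Because interest is set to ALL, the results are fetched through getPaths() rather than getPath(), as noted above.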
http://manpages.ubuntu.com/manpages/precise/man3/SoSearchAction.3.html
At 12:50 PM 8/10/00 -0700, Shannon -jj Behrens wrote: >Chuck, > >We have all agreed that we are tired of duplicating effort. I would >very much like to merge our two products (i.e. Aquarium and WebWare), as >each has a lot of stuff the other doesn't (if you look closely, you'll >find that Aquarium does indeed have stuff that WebWare doesn't). >However, I do have some issues. Please feel free to respond to my >issues: Sounds interesting... Fenster's author, Dan Green, has also expressed a similar interest although I think his product is young enough that he's joining the Webware effort rather than asking for a merger. Perhaps this week is the turning point of some consolidation of the infamous "python web module authors". I could definitely be interested in a merger with Aquarium. As you've noted, there are things to discuss... >The FreeEnergy framework - The FreeEnergy framework is best documented >by FRD for the Java version of FreeEnergy ><>. It has helped up with >numerous clients and is the basis of FreeTrade, an open source ecommerce >framework written in PHP, which I am coauthor of ><>. FreeEnergy has proven to be a >successful approach to Web application development, and I will not very >easily give up such an approach. Furthermore, Aquarium implements the >FreeEnergy framework in such a clean, flexible, OOP manner that it would >be a shame to give it up. Admittedly, I haven't had time to check this out yet. I will, though. >Coding style - I am an admitted anal perfectionist. If you've looked at >my code, perhaps you'll agree that it's extremely clean and well >documented (at least that's my goal). Part of my goal in creating >Aquarium was that the code would be so cleanly coded that other >developers would find it easy to hack the source. Afterall, an aquarium >is a transparent structure of great strength. It yields a great amount >of support, but it doesn't necessarily hide anything.
This is the >spirit of Aquarium. I started reading one of the main scripts. I think it was the "engine". I did in fact, notice lots of comments which was nice. However, I also noticed the script was, er, "naked". That is to say it was just a series of statements in a file as opposed to a class with methods. Were it a class, I could subclass it and override a method to customize. As it stands now, I would have to, as you put it, hack the existing source. In the event that my hack is special to my needs or gets rejected for inclusion in the main source, I have to rehack every time I get a new version. That makes me shudder. I didn't get much further than that so I'm not sure to what extent this applies to other parts of the code. As you say, you're big on clean, commented code. I'm also big on objects. It's almost as easy to write as a class as not, and later you'll find that you can subclass, customize, instantiate, cache, etc. to your heart's content often in ways that are eminently useful but weren't originally planned for. "The Object is the Way." Unfortunately, I run into to people who have been scarred by C++ or VB and associate "object" with "bad". I was lucky enough to spend 6 years with Objective-C and NEXTSTEP. >Leadership - Naturally, it would be tough to yield the position of power >that I have as project lead of Aquarium. However, I don't really care >about leading half as much as I care about clean designs and >ultra-clean, well-commented code. In any case, the most successful open >source projects are not stiffled by an over-authoritative leader, rather >they are led by the contributors--i.e. coding merit determines >leadership. Well, there isn't a whole lot more to say here. I do, in fact, consider myself the chief architect for Webware and therefore exercise some authority in terms of design and pushing forward. This isn't different than many other successful OS efforts like Linux, BSD, Emacs, etc. 
I'm glad to hear that you care so much about ultra-clean, commented code. I do as well, although I occasionally let something slide while we mull over the possibilities. There's always a balance to be struck between the various factors of a project. >Those are my concerns. Please let me say that I found Web Ware to be a >very good project. Furthermore, I hope you are not insulted by anything >I have said. If you are, please accept my apologies. Not insulted. Honest dialogue is what it's going to take to collaborate on anything. Can you shoot us a brief list of where Webware and Aquarium overlap and what Aquarium has that Webware doesn't? BTW if you look through the Webware discussion list archives, I posted a TO DO list just recently. Or I can send it to you.... Also, I've CCed the webware-discuss list. -Chuck At 09:24 AM 8/10/00 -0700, Sam Penrose wrote: >I'm glad Webware exists. When we need more power than we can get from >clustered Apache CGI servers, it will be one of the things we look at. >(I also really appreciate the links to aolserver and PyWX.) I think >what all of you guys are doing is cool, and I hope I can contribute to >it. But I think it's worth noting the gap between the generic >approach and my professional reality. It might also be worth noting >that every generic solution I know of--Zope, ASP--makes me run >screaming. If I'm in my own warped little world here, feel free to >disregard me..:-). Two things: * I wrote Webware in response to my problems with Zope (one of which was being able to bite off a reasonable portion of the system and get started with my project). I'd hate to know that people passed on Webware because of their experience with Zope. I do know people who won't touch anything but C++ now that they've learned it. They're afraid languages like Python will be as much of a pain in the ass. :-) * Ultimately when you write a web app you have to write *something* and often that ends up being CGI scripts. 
Well, it's just about as easy to write a servlet as it is a CGI script and in doing so you get the generic benefits of WebKit. I guess ultimately I can't really see the point of writing a CGI script instead of a servlet. Heck, you can even run our app server in CGI mode if you don't like the idea of it hanging around. Then later with some configuration you can switch to server mode without changing a line of code. Think of it. You get an order of magnitude performance increase with no changes to your code. In terms of PSP and other components, you can use them if you want or not. Webware is not a monolith. -Chuck FYI- Somewhat similar in concept to webware, and pretty functional. -- David ------------------------------- David Creemer david@... At 09:22 AM 8/9/00 -0700, Sam Penrose wrote: >Yesterday I observed in comp.lang.python that the sheer variety of >these suckers implies a few things: > >1) The creators seem to see more value in rolling their own solution >than in contributing to someone else's. (Is there any project other >than Zope with more than, say, 5 significant contributors? How many >even have 5?) Agreed. But in my particular case, I didn't find a "someone else"s that was sufficiently OOPish, extensible and represented an amount of work I couldn't duplicate in a reasonable amount of time. >2) The real open source project we have in common is Python. The reason >shared contributions are valued less than individual control is that it >is so easy to create these suckers in Python. You can get one up and >running in a few hundred lines of code. Why bother accepting somebody >else's vision when you can realize your own in a couple days of fooling >around? What you really get with a few hundred lines of code is a *taste* of your vision. 
I doubt that in a few hundred lines of code you will get:

* a server
* caching
* access logs
* error logs
* exception handling
* admin pages
* templating
* integration of Python and HTML
* session management
* database connection pooling
* fault tolerance
* clustering
* __maturity__
* etc.

Some of these things you might not value to the point of building them by yourself, but you still appreciate having them when they are available. In some cases, you might not even think of certain features, but appreciate them after using them. The problem is that everyone starts from scratch such that we can't really benefit from each other's work without entirely jumping ship. What I'm striving for in Webware is a modular architecture where someone can say "hey, nice app server with logging, caching and such, but I want to roll my own templates". Webware will certainly grow, but it won't ever be a monolith. We already have an extensible architecture with plug-ins and servlet factories.

>3) While these systems obviously have value (I'm being paid to develop >one whose budget will eventually climb into 6 figures), their number
In order to gain some real steam, a project has to get to that point where people say "if I roll my own I'm going to miss out on a lot of great things that someone is already providing". I installed Mandrake Linux a few months back. It was my first Linux install. I received 2-5 versions of every kind of application. All the versions had something, but none of them had everything and they pretty much all sucked. I think this is an open source phenomena. It's more fun to roll your own than figure out how to improve someone else's, because you don't have to change the way you think or learn anything new. But it's long term maturity that really makes a product enjoyable to use... It takes A LOT to get "there". Features, performance, documentation, maturity, etc. Webware is up to 16,000 lines of code and we still have a ways to go, but I'm confident we'll get there. My impression is that it's already the strongest package on the list at.

-Chuck

At 12:42 AM 8/10/00 -0400, Daniel Green wrote:

Hi Dan,

Welcome to the list. I've been keeping kind of a messy TO DO list which I'm including here. The current plan is to freeze the code Sunday night, crank out docs, test and then release 0.4. I'd recommend for starters, becoming familiar with Webware and helping with the testing of 0.4. After that, you should have an idea of what you think you can and want to work on. Have you been looking at the 0.3 version or the CVS repository (e.g., pre-0.4)?

Here's the list:

-------------
WEBWARE TO DO
-------------

--- 0.4 ---
[ ] Handle unknown session ids with a specific page.
[ ] Fix ExamplePage to handle Plug-ins generically. Fix PSP link.
[ ] When I do a CVS co on FreeBSD, I get 'cache', but not 'Cache'
[ ] Make OneShot executable in the repository
[ ] With release 0.4 change SourceForge status to Alpha.
[ ] Make other adapters inherit Adapter.py
[ ] Disk based sessions

--- 0.5 ---
[ ] FormKit/
[ ] Page: Add features for style sheets.
[ ] Add Last-modified: to generic files that get served up (this problem came up because of images getting reloaded everytime).
[ ] bug:! from Marcelo
[ ] Review default file type for generic file servlet. Should probably be text, not text/html.
[ ] Application.running to something else like Application._running
[ ] Number the requests and use the number in the BEGIN REQUEST message
[ ] Investigate case insensitive URLs. TO DO: send out a msg to the group on this. support both UNIX and Windows?
[ ] Memory leak TO DO: test for this.
[ ] Adaptors get raw responses as dictionaries. They should be structured strings instead so that adapters can start using them before they're finished reading.
[ ] Try out CGI Wrapper on Windows
[ ] AppServer doesn't print anything interesting or informative when it exits (on Windows)
[ ] Review name and use of settings
[ ] WebKit, FCGIAdaptor: apply WebKitDir feature
[ ] Add Sessions/ and save them to disk (according to a setting)
[ ] AppServer's shutDown() method does not actually cause a shut down. The sweeper thread of Application hangs around.
[ ] Consider adding mxDateTime
[ ] Need benchmarks for both OneShot.cgi and WebKit.cgi so that as new releases are made we can monitor performance.
[ ] Test with another web server besides Apache. Xitami seems like a good choice.
[ ] debug.cgi adapter:
        print "content-type: text/plain"
        print
        import sys
        sys.stderr = sys.stdout
[ ] Review breakdown of methods of Page.
[2000-06-10] Split out AppServer into subclasses
[2000-06-10] Make special OneShotAppServer for OneShot.cgi.
[2000-07-14] Fix address.text to use 127.0.0.1 instead of '' for hostname
[2000-07-14] Netscape Nav 4.7 doesn't like the sidebar on admin pages.
[2000-07-14] Change host from hardcoded 127.0.0.1 to a setting 'Host'
[2000-07-14] "cache/" to "Cache/"; make a dir for each plug-in; Put a _README similar to
[2000-08-01] An adapter for. Jay checked this in.
[2000-08-02] Review all examples of for use of Page attributes like _session and convert to method calls.
[2000-08-02] Consider WebKit.bat|WebKit instead of AppServer.bat|AppServer. 8/2: Stick with AppServer.
[2000-08-02] Fix .text extension 2000-08-02: It's really a browser MIME type thing.
[2000-08-02] PlugInDirs
[2000-08-02] Remove self._session = None in Servlet.py
[2000-08-04] KeyValueAccess to NamedValueAccess and put in MiscUtils
[2000-08-04] Change SessionTimeout to minutes instead of seconds
[2000-08-04] New sweeper method.
[2000-08-04] Pass 'transaction' into imported servlet scripts so they can use them in set up.
[2000-08-04] Add MiscUtils testing
[2000-08-07] Created Adapter, subclass of Configurable.
[2000-08-07] OneShot: When capturing output, provide option to word wrap it so the table doesn't get so wide
[2000-08-07] Navbar for Examples. SidebarPage
[2000-08-07] Review Cans. Discussed with Jay. On hold for now.
[ ]
- When a servlet is loaded, it is given the global 'transaction' in case it needs to do something sneaky.
- Instruct on the use of self.request(), not self._request in subclasses of Page. This is required for session and could lead to breaks from future releases if not applied to the other objects.
- Installation guide. Review and enhance, as needed, the release instructions for Webware
- Transaction gets passed to loaded scripts
- Config files for adapters
- Debugging: When using OneShot consider turning on ShowConsole

--- 0.6 ---
[ ] MiddleKit
[ ] Load balancing
[ ] Fault tolerance
[ ] Expanded regression test suite
[ ] Automated regression test suite

--- 1.0 ---
[ ] Refinements and fixes
[ ] More testing
[ ] Tutorial

--------
Whenever
--------
[ ] Consider moving Configurable from WebKit to MiscUtils
[ ].

-Chuck
http://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200008&viewday=10
I'm trying to use SOIL (Simple OpenGL Image Library) to load images in my project. However, it is crashing whenever I call SOIL_load_image, with an unhandled exception writing to location 0x00000014. (I'm following this tutorial, btw:). I'm unsure of what I can do to fix this issue, but I think it's something related to how I'm setting up the library. Yesterday, I was trying to do something similar with libPNG and was running into a similar crash when the library attempted to do some file IO. At the time, I thought it was probably an issue with the library, but now that I'm getting another crash with a different library, I think it might be me.

For the SOIL library, I built the project and copied the debug SOIL.lib to my VC/lib directory, and SOIL.h to the VC/include directory. It seems to be building fine, but is crashing. I tried it with the release .lib as well and the same thing happens.

Where I'm trying to load the image (GLTextureFactory.cpp):

#include "GLTextureFactory.h"

#pragma comment(lib, "SOIL")
#include <SOIL.h>

#include "GLTexture.h"

GLTextureFactory::GLTextureFactory(void)
{
}

GLTextureFactory::~GLTextureFactory(void)
{
}

GLTexture *GLTextureFactory::createTextureForImage(const char *imageFilename)
{
    int width, height;

    // CRASHING ON THIS LINE
    unsigned char* image = SOIL_load_image(imageFilename, &width, &height, 0, SOIL_LOAD_RGB);

    //glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image );

    return new GLTexture(0);
}

#pragma comment(lib, "GLFW")
#pragma comment(lib, "opengl32")
#pragma comment(lib, "glu32")
#pragma comment(lib, "glew32")

Edited by scottrick49, 07 February 2013 - 08:04 PM.
http://www.gamedev.net/topic/638450-crash-calling-soil-load-image/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024
This document explains how to use the avl_tree template. Adelson-Velskii and Landis Balanced Binary Search Trees (or AVL Trees) are described in many good textbooks on fundamental data structures. The best web page I’ve been able to find on the topic is “A Visual Basic AVL Tree Container Class”. avl_tree This document, as well as the source code it describes, is in the public domain. To avoid possible confusion about the terms that I use in this document (and in the source comments), here is a summary description of AVL Trees. An AVL Tree is a set of nodes (or elements). Each node is associated with a unique key value. The key values can be ordered from least to greatest. Each node in the tree may (or may not) have a less child node, and it may (or may not) have a greater child node. If node A is a child of node B, then B is the parent of A. If A is the less child of B, A’s key must be less than B’s key. Similarly, if A is the greater child of B, A’s key must be greater than B’s key. All nodes in a tree have exactly one parent, except for the root node, which has no parent. Node A is a descendant of node C if C is A’s parent, or if A’s parent is a descendant of C. If a node is not the root of the entire tree, it is the root of a subtree consisting of the node and all its descendants. The lesser subtree of a node is the subtree whose root is the less child of the node. The greater subtree of a node is the subtree whose root is the greater child of the node. The depth of a node is one more than the depth of its parent. The depth of the root node is 1. The depth of a tree is the maximum node depth. The balance factor of a node is the depth of its greater subtree minus the depth of its lesser subtree, with non-existent subtrees being considered to have a depth of 0. In an AVL tree, the balance factor of any node can only be -1, 0 or 1. 
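The depth and balance-factor definitions above translate directly into code. Here is a small Python illustration (the tuple-based tree encoding is just for this example, not part of the avl_tree template):

```python
# A tree is either None (empty) or a tuple (lesser_subtree, greater_subtree).

def depth(tree):
    """Depth of a tree: 0 for an empty tree, else 1 + max child depth."""
    if tree is None:
        return 0
    return 1 + max(depth(tree[0]), depth(tree[1]))

def balance_factor(tree):
    """Depth of the greater subtree minus depth of the lesser subtree."""
    return depth(tree[1]) - depth(tree[0])

# A 3-node chain: root -> less child -> that child's greater child.
leaf = (None, None)
tree = ((None, leaf), None)

print(depth(tree))           # 3
print(balance_factor(tree))  # -2: not a legal AVL shape, rebalancing is needed
```

A node with balance factor -2 or +2 is exactly what an AVL insertion or deletion must repair with a rotation, which is why the template constrains stored balance factors to -1, 0 and 1.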
There are several open-source C and C++ implementations of AVL Trees available (see "Hot Links", then "Data Structures" at the C/C++ User's Group Page). But as far as I know, this is the only one that manipulates the nodes of the tree using abstract "handles" instead of concrete pointers. If all the nodes are in a single array, you can use node indexes as handles instead of node pointers. This approach makes it possible to compress the size of the nodes if memory is tight. Index handles can make tree persistence as simple as writing the node array out with a single disk write, and reading it back in with a single disk read. The template also allows for a tree to be in secondary storage, with nodes being "paged" in and out of memory.

To achieve the desired level of abstraction, the avl_tree template uses lots of short inline functions. Because of this, function inlining can significantly improve performance when using the template. If the test suite (test_avl.cpp) is compiled with GNU GCC using level 1 optimization (-O option), it executes twice as fast as when the test suite is compiled without optimization (the default).

The template code makes no use of recursion. The implementation is stack-friendly in general, except perhaps for the iter class. Instances of iter contain an array of handles whose dimension is the maximum tree depth minus one.

Since key comparisons can potentially be complex, the code avoids repeated comparisons of the same pair of node key values.

To avoid clutter, default destructor functions are not documented.

All of this code compiles with a contemporary version of GNU GCC, and with Visual C++ .NET.

To help describe the constraints on template class/typename parameters, or on member types of template class parameters, I like to use reference classes. This doesn't necessarily mean that the type being constrained has to use the reference class as its definition.
It is only necessary that every possible usage of the reference class or one of its instances is also a possible usage of the constrained type or one of its instances. When an identifier with the prefix ANY_ is used, this means that all occurrences of that identifier should be substituted with the same type (or with types that implicitly convert to the substituted type). Take for example the function template:

template <class A>
void foo(A &a) { a.x(a.y()); }

The reference class for the parameter A would be:

class A
  {
  public:
    void x(ANY_px p);
    ANY_px y(void);
  };

The following class could be passed as the class A parameter to the template:

struct someA
  {
  public:
    static double x(int aintp);
    signed char y(bool f = true) const;
  };

Since the return type of x() is void in the reference class, it can return any type (or be void) in the actual parameter class. y() can return signed char because signed char implicitly converts to int. Member functions can be made static or const because these make the usage of a function more, not less, flexible.

The avl_tree template is in the abstract_container namespace. The AVL Tree header file also defines this enumerated type:

enum search_type
  {
  EQUAL = 1,
  LESS = 2,
  GREATER = 4,
  LESS_EQUAL = EQUAL | LESS,
  GREATER_EQUAL = EQUAL | GREATER
  };

in the abstract_container namespace. The avl_tree template begins with:

template <class abstractor, unsigned max_depth = 32>
class avl_tree
  ...

All members of the reference class are public. Each node has to be associated with a node handle, which is a unique value of the handle type. Here is the reference class for handle:

class handle
  {
  public:
    // No default value for handles is assumed by the template.
    handle(void);
    handle(handle &h);
    void operator = (handle &h);
    bool operator == (handle &h);
  };

Each node has to be associated with a key, which is a unique value of the key type.
The difference between a key and a handle is that a node can be conveniently “looked up” using its handle, but it can’t be conveniently looked up using its key. In fact, the whole point of this template is to make it convenient to look up a node given its key. Here is the reference class for key:

class key
  {
  public:
    // Only have to copy it
    key(key &k);
  };

The type size is char, short, int or long, signed or unsigned. It must be large enough to hold the maximum possible number of nodes in the tree.

handle get_less(handle h, bool access = true);
handle get_greater(handle h, bool access = true);

Return the handle of the less/greater child of the node whose handle is h. If access is true, and the child node is in secondary storage, it has to be read into memory. If access is false, the child node does not have to be read into memory. Ignore the access parameter if your instantiation makes no use of secondary storage.

void set_less(handle h, handle lh);
void set_greater(handle h, handle gh);

Given the handle h of a node, set the handle of the less/greater child of the node.

int get_balance_factor(handle h);

Return the balance factor of the node whose handle is h.

void set_balance_factor(handle h, int bf);

Set the balance factor of the node whose handle is h. The only possible balance factor values are -1, 0 and 1.

int compare_key_node(key k, handle h);

Compares a key with the key of a node. Returns a negative value if the key is less than the node’s key. Returns zero if the key is the same as the node’s key. Returns a positive value if the key is greater than the node’s key.

int compare_node_node(handle h1, handle h2);

Compares the keys of two nodes. Returns a negative value if the first node’s key is less than the second node’s key. Returns zero if the first node’s key is the same as the second node’s key. Returns a positive value if the first node’s key is greater than the second node’s key.
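The abstractor members described so far could be implemented over a single node array, using array indexes as handles, as suggested in the introduction. The sketch below is hypothetical (the node layout, pool size and class name are mine); only the member names and semantics follow the contract described above:

```cpp
#include <cassert>

// Hypothetical abstractor keeping every node in one array and using
// array indexes as handles.  Illustrative only.
class array_abstractor {
  public:
    typedef unsigned short handle;  // index into the node pool
    typedef int key;
    typedef unsigned short size;

    struct node {
        key k;
        handle less, greater;   // child links, stored as indexes
        signed char bf;         // balance factor: -1, 0 or 1
    };

    node pool[100];

    // No secondary storage in this sketch, so the access parameter
    // can safely be ignored.
    handle get_less(handle h, bool = true) { return pool[h].less; }
    handle get_greater(handle h, bool = true) { return pool[h].greater; }
    void set_less(handle h, handle lh) { pool[h].less = lh; }
    void set_greater(handle h, handle gh) { pool[h].greater = gh; }

    int get_balance_factor(handle h) { return pool[h].bf; }
    void set_balance_factor(handle h, int bf)
      { pool[h].bf = (signed char)bf; }

    // Keys are plain ints here, so subtraction yields the sign the
    // contract asks for (assuming keys are small enough not to overflow).
    int compare_key_node(key k, handle h) { return k - pool[h].k; }
    int compare_node_node(handle h1, handle h2)
      { return pool[h1].k - pool[h2].k; }
};
```

With 16-bit index handles a node can be packed far smaller than one holding 64-bit pointers, which is the space saving the introduction alludes to. A null() member returning an out-of-range index, and a trivial read_error(), described next, would complete the contract.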
handle null(void);

Always returns the same, invalid handle value, which is called the null value.

bool read_error(void);

Returns true if there was an error reading secondary storage. If your instantiation of the template makes no use of secondary storage, use this definition:

bool read_error(void) { return(false); }

abstractor(void);

A default constructor must be available.

The max_depth template parameter is the maximum tree depth for an instance of the instantiated class. You almost certainly want to choose the maximum depth based on the maximum number of nodes that could possibly be in the tree instance at any given time. To do this, let the maximum depth be M such that MN(M) <= N < MN(M + 1), where N is the maximum number of nodes and MN(d) means the minimum number of nodes in an AVL Tree of depth d. Here is a table of MN(d) values for d from 2 to 45. If, in a particular instantiation, the maximum number of nodes in a tree instance is 1,000,000, the maximum depth should be 28. You pick 28 because MN(28) is 832,039, which is less than or equal to 1,000,000, and MN(29) is 1,346,268, which is strictly greater than 1,000,000. If you insert a node that would cause the tree to grow to a depth greater than the maximum you gave, the results are undefined.

Each increase of 1 in the value of max_depth increases the size of an instance of the iter class by sizeof(handle). The only other use of max_depth is as the size of bit arrays used at various places in the code. Generally, the number of bytes in a bit array is the size rounded up to a multiple of the number of bits in an int, and divided by the number of bits in a byte. All this is a roundabout way of saying that, if you don’t use iter instances, you can guiltlessly add a big safety margin to the value of max_depth.

The handle type is the same as the handle type member of the abstractor parameter class. The key type is the same as the key type member of the abstractor parameter class. The size type is the same as the size type member of the abstractor parameter class.

handle insert(handle h);

Insert the node with the given handle into the tree. The node must be associated with a key value.
The initial values of the node’s less/greater child handles and its balance factor are don’t-cares. If successful, this function returns the handle of the inserted node. If the node to insert has the same key value as a node that’s already in the tree, the insertion is not performed, and the handle of the node already in the tree is returned. Returns the null value if there is an error reading secondary storage. Calling this function invalidates all currently-existing instances of the iter class (that are iterating over this tree).

handle search(key k, search_type st = EQUAL);

Searches for a particular node in the tree, returning its handle if the node is found, and the null value if the node is not found. The node to search for depends on the value of the st parameter.

handle search_least(void);

Returns the handle of the node whose key is the minimum of the keys of all the nodes in the tree. Returns the null value if the tree is empty or an error occurs reading from secondary storage.

handle search_greatest(void);

Returns the handle of the node whose key is the maximum of the keys of all the nodes in the tree. Returns the null value if the tree is empty or an error occurs reading from secondary storage.

handle remove(key k);

Removes the node with the given key k from the tree. Returns the handle of the node removed. Returns the null value if there is no node in the tree with the given key, or an error occurs reading from secondary storage. Calling this function invalidates all currently-existing instances of the iter class (that are iterating over this tree).

void purge(void);

Removes all nodes from the tree, making it empty.

bool is_empty(void);

Returns true if the tree is empty.

bool read_error(void);

Returns true if an error occurred while reading a node of the tree from secondary storage. When a read error has occurred, the tree is in an undefined state.

avl_tree(void);

Initializes the tree to the empty state.
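The document does not spell out the per-flag behavior of search(), so the following is only a hypothetical reference model over a sorted array, assuming the conventional reading: EQUAL accepts an exact match, LESS accepts the greatest key below k, GREATER the least key above k, and the combined flags try EQUAL first. Names and semantics here are my assumptions, not the template's code:

```cpp
#include <cassert>

enum search_type {
    EQUAL = 1,
    LESS = 2,
    GREATER = 4,
    LESS_EQUAL = EQUAL | LESS,
    GREATER_EQUAL = EQUAL | GREATER
};

// Hypothetical linear-scan model of the assumed flag semantics.
// Returns the index of the matching element in sorted[], or -1.
int model_search(const int *sorted, int n, int k, search_type st) {
    if (st & EQUAL)
        for (int i = 0; i < n; i++)
            if (sorted[i] == k) return i;        // exact match
    if (st & LESS)
        for (int i = n - 1; i >= 0; i--)
            if (sorted[i] < k) return i;         // greatest key below k
    if (st & GREATER)
        for (int i = 0; i < n; i++)
            if (sorted[i] > k) return i;         // least key above k
    return -1;
}
```

Under this model, LESS_EQUAL falls back to the greatest smaller key only when no exact match exists, which is the behavior the bit-or definitions of the enum suggest.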
template <typename fwd_iter>
bool build(fwd_iter p, size num_nodes);

Builds a tree from a sequence of nodes that are sorted in ascending order by their key values. The number of nodes in the sequence is given by num_nodes. p is a forward iterator that initially refers to the first node in the sequence. Here is the reference class for fwd_iter:

class fwd_iter
  {
  public:
    fwd_iter(fwd_iter &);
    handle operator * (void);
    void operator ++ (int);
  };

Any nodes in the tree (prior to calling this function) are purged. The iterator will be incremented one last time when it refers to the last node in the sequence. build() returns false if a read error occurs while trying to build the tree. The time complexity of this function is O(n x log n), but it is more efficient than inserting the nodes in the sequence one at a time, and the resulting tree will generally have better balance.

If the abstractor class has a copy constructor and assignment operator, the avl_tree instantiation will have a (default) copy constructor and assignment operator.

Instances of the iter member class are bi-directional iterators over the ascendingly sorted (by key) sequence of nodes in a tree. The subsections of this section describe the public members of iter.

iter(void);

Initializes the iterator to the null state.

void start_iter(avl_tree &tree, key k, search_type st = EQUAL);

Causes the iterator to refer to a particular node in the tree given as the first parameter. If the particular node cannot be found in the tree, or if a read error occurs, the iterator is put into the null state. The particular node to refer to is determined by the st parameter.

void start_iter_least(avl_tree &tree);

Causes the iterator to refer to the node with the minimum key in the given tree. Puts the iterator into the null state if the tree is empty or a read error occurs.

void start_iter_greatest(avl_tree &tree);

Causes the iterator to refer to the node with the maximum key in the given tree.
Puts the iterator into the null state if the tree is empty or a read error occurs.

handle operator * (void);

Returns the handle of the node that the iterator refers to. Returns the null value if the iterator is in the null state.

void operator ++ (void);
void operator ++ (int);

Causes the iterator to refer to the node whose key is the next highest after the key of the node the iterator currently refers to. Puts the iterator into the null state if the key of the node currently referred to is the maximum of the keys of all the nodes in the tree, or if a read error occurs. Has no effect if the iterator is already in the null state.

void operator -- (void);
void operator -- (int);

Causes the iterator to refer to the node whose key is the next lowest after the key of the node the iterator currently refers to. Puts the iterator into the null state if the key of the node currently referred to is the minimum of the keys of all the nodes in the tree, or if a read error occurs. Has no effect if the iterator is already in the null state.

bool read_error(void);

Returns true if a read error occurred.

These protected members exist and can be safely used:

abstractor abs;

The abstractor instance used by the tree.

handle root;

Contains the handle of the root node of the AVL Tree. Contains the null value if the tree is empty.

The other protected members are most easily understood by reading the source code.

12 Sep 2002 - fixed some problems in the code for handling errors if tree nodes are in secondary storage.
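The max_depth selection rule given earlier (pick the largest M with MN(M) less than or equal to the maximum node count) can be automated. MN(d) follows the standard AVL recurrence MN(1) = 1, MN(2) = 2, MN(d) = MN(d-1) + MN(d-2) + 1 (a minimal tree of depth d is a root plus minimal subtrees of depths d-1 and d-2), which reproduces the figures quoted in the text: MN(28) = 832,039 and MN(29) = 1,346,268. The helper names below are mine, not part of the template:

```cpp
#include <cassert>

// Minimum number of nodes in an AVL tree of depth d:
// MN(1) = 1, MN(2) = 2, MN(d) = MN(d-1) + MN(d-2) + 1.
unsigned long long min_nodes(unsigned depth) {
    unsigned long long a = 0, b = 0;  // MN(d-2) and MN(d-1)
    unsigned long long mn = 0;
    for (unsigned d = 1; d <= depth; d++) {
        mn = (d == 1) ? 1 : (d == 2) ? 2 : a + b + 1;
        a = b;
        b = mn;
    }
    return mn;
}

// Largest M with MN(M) <= max_nodes: the max_depth value to
// instantiate the template with for a tree that will never hold
// more than max_nodes nodes at once.
unsigned choose_max_depth(unsigned long long max_nodes) {
    unsigned d = 1;
    while (min_nodes(d + 1) <= max_nodes)
        d++;
    return d;
}
```

For the article's own example, choose_max_depth(1000000) yields 28, matching the reasoning given above.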
http://www.codeproject.com/Articles/2839/C-AVL-Tree-Template?msg=3586635
CC-MAIN-2014-52
refinedweb
2,745
69.01
HowTo talk:Get Banned From Uncyclopedia, the content-free encyclopedia ZOMG this is seriously subversive stuff I have some additional thoughts on how to get banned, but I am afraid of getting banned when I add them to this page. Please advise me on how to proceed. Just for the sake of argument (not trying to get banned), these are my ideas: writing articles in a different language than this one, abudnant tpyos, insisiting that you have imaginary grue for a pet (who's a clever grue then ey? You are, Wendy! Yes! Here, a ground borg cooky for cutest grue in the whole wide worldweb!) and pissing off our Belgian friend Chefke (aka Moneysign). --di Mario 16:02, 23 July 2006 (UTC) - Writing in a different language will get you a {{Qua?}} at most (unless you deliberately recreate it). Typos? I've never seen anyone get banned for typos... Having an imaginary pet grue seems to be a thing for some of the chatters. Don't see why it would be a problem. And pissing off an admin is what gets you banned, yes. But that's because you violate rules and policies. Personal attacks are something I rarely ban for, provided they're not disruptive (as in repeatedly performed). - Now I would like to know what you're playing at, Dimario. I've told you I'd rather not have my real name used online because I prefer to keep the two seperate. Yet now you've gone and put it here... Are you trying to lure me out of my cave and get me to react harshly? --⇔ Sir Mon€¥$ignSTFU F@H|NS|+S 16:20, 23 July 2006 (UTC) - Wait, wasn't Chefke the name of the city you used as a home base in Diablo?--<< >> 16:23, 23 July 2006 (UTC) - Okay, Money, you are true to form. I deliberately put something here that imho was bound to catch your attention and hopefully make you react in a not-amused way. It worked. Mind you, this is the talk page, not the article itself. 
And since the whole point of the article is letting people know how to get banned, providing this information for them rather is in the spirit of the page's intention. It is a sure-fire way of attracting your ill-disposed attention. Bannination ensured! And before you care to ask, yes, I am messing with someones brain. I'll let you off the hook though, changed the ******* reference to Chefke. In hopes of unjust banning (for others than me), --di Mario 19:09, 23 July 2006 (UTC) <3 “Someday you'll look back on all this and laugh.” I love that I began this article over my frustration with the trials of adminship, but that it has taken on a life of its own now with the humourous contributions of so many authors. I'm at the point now where this is entertaining to read as a third party, despite the fact that I'm still an admin and the content is still true. ~ T. (talk) 16:30, 4 November 2006 (UTC) - Heh. I fully agree. Srsly, we should do something about the way some admins ban over the most trivial things... --User:Nintendorulez 00:09, 5 November 2006 (UTC) - Makes a note of Nintendorulez name for future reference.... -- Sir Mhaille (talk to me) - *RUNS SCREAMING IN TERROR AND CAPS LOCK* --User:Nintendorulez 00:29, 5 November 2006 (UTC) - I'm seroiusly amazed that nobody has listed either "let Famine see you edit something" or "be Nintendorulez". Sir Famine, Gun ♣ Petition » 11/5 03:35 Question Why would you get banned for nominating BENSON for WotM? --Micoolio101 (whine • vandalism) - Ask Famine? --User:Nintendorulez 22:55, 10 November 2006 (UTC) - But why? Isn't he just like any other user? --Micoolio101 (whine • vandalism) 04:22, 11 November 2006 (UTC) TOMPKINS TOMPKINS IS STRAIGHT, AND QUITE A PLEASANT CHAP OMG LOLZ Put... Put this shit on. That is a known subversive site. 65.163.112.153 02:36, 9 February 2007 (UTC) No thank you, we would rather not.--User:Zerotrousers/sig 10:30, 23 February 2007 (UTC) Add... 38. Edit article This page does not exist. 
-MrBucket 09:55, 18 March 2007 (UTC) - We have that. "23. Create 'This page does not exist'." --AAA! (AAAA) 10:51, 18 March 2007 (UTC) Add this please. Hello, I've just translated this article to portuguese, and I want to ask someone to put [[pt:Deslivros:Como ser banido na Desciclopédia]]. On it. Please. Signed: Diamond Edge 200.139.100.5 23:07, 16 January 2008 (UTC) Add this one too Tell a Admin that they're not cool, helpful, sexy, attractive, nice at all. Tell him/her that "You are full of shit!". Good way to get banned. Secret Agent Man 03:10, 22 November 2008 (UTC) Another addition I made an edit to an article in my namespace, then I realized that I included two very harmless links. One of these was to permaban me, so I protected my edit. Should including such a link be added to the list? An-elite 02:04, 15 July 2009 (UTC) Another suggestion For Admins: Replace MediaWiki:Common.css with that of Wikifur. 24.13.203.96 18:03, 18 August 2009 (UTC)
http://uncyclopedia.wikia.com/wiki/HowTo_talk:Get_Banned?oldid=4058679
CC-MAIN-2016-18
refinedweb
888
74.08
ApplicationWindow: can not set Backgroundcolor to white

- lunatic fringe
I can change the background color with the color property in the ApplicationWindow. The only color which has no effect is "white". If I use "#FFFFFE" it is like white. Why is "#FFFFFF" ignored as a background color?

- SGaist Lifetime Qt Champion
Hi,
Can you show the code where you set that color? Also, what OS/Qt combo are you using?

- lunatic fringe
It happens on Win 7 with Qt 5.3. Changing line 10 to color: "#FFFFFF" changes the background to a gray color.
@import QtQuick 2.3
import QtQuick.Controls 1.2

ApplicationWindow {
    visible: true
    width: 640
    height: 480
    title: qsTr("Hello World")
    color: "#FFFFFE"

    menuBar: MenuBar {
        Menu {
            title: qsTr("File")
            MenuItem {
                text: qsTr("&Open")
                onTriggered: console.log("Open action triggered");
            }
            MenuItem {
                text: qsTr("Exit")
                onTriggered: Qt.quit();
            }
        }
    }

    Text {
        text: qsTr("Hello World")
        anchors.centerIn: parent
    }
}
@

- SGaist Lifetime Qt Champion
Indeed, looks like there's something strange going on. You should take a look at the bug report system to see if something is known.
https://forum.qt.io/topic/48527/applicationwindow-can-not-set-backgroundcolor-to-white
CC-MAIN-2018-51
refinedweb
174
68.36
Developer Channel

Implementing Ajax Components in the JWL Framework
Learn how to implement all types of JWL Ajax Components into your projects. (JWL is IBM's JavaScript-based Widget Library.)

Transparency in Ajax Applications - Part 2
Beyond the general danger of revealing application logic to potential attackers, there are specific mistakes that programmers make when writing client-side code that can open their applications to attack.

Transparency in Ajax Applications
An average user might not be aware that the logic of the Ajax application is more exposed than that of the standard Web page. It's relatively simple for an advanced user (or an attacker) to "look inside" and gain knowledge about the internal workings of the application.

Creating an Ajax-Enabled Calendar Control
This article, by Scott D. Smith, shows how to use the ASP.NET Ajax framework's UpdatePanel to turn the Calendar control into an Ajax-enabled Calendar control.

Universally Related Popup Menus Ajax Edition: Part 3
This week we conclude the series with a line-by-line walk-through of the JavaScript code and describe the server-side classic ASP script code.

Universally Related Popup Menus Ajax Edition -.

Universally Related Popup Menus AJAX Edition: Part 1
This week we look at a brief overview of AJAX, some relevant JavaScript 1.3 enhancements, how to run the example and using the script within your own Web page.

Using Multiple JavaScript Onload Functions
JavaScripts are usually written to accomplish a given task, such as creating a rotating picture gallery, or to validate a form. For each task, a separate script is necessary. Often, a script is called using an onload function. Using the onload event handler can sometimes be a bit tricky.
Check out this easy, fool-proof method for use in your next script.

Getting Started with Silverlight - Part 3: Properties
In this third and final installment, we look at interacting with the Silverlight control programmatically. Topics covered include: the settings property, the content property and other members.

Review: Ajax Starter Kit
The term Ajax is often referred to as a new type of technology, but is actually several technologies that work together. How it does so is the subject of this article.

Copy and Paste JavaScript with an Ajax Engine
It's not necessary to understand the characteristics of Ajax in order to use it. This article provides copy and paste JavaScript with an Ajax engine. No modifications are necessary.

Interactive UIs with Microsoft ASP.NET Ajax: Using the UpdatePanel
This multi-part article by Scott Mitchell looks at using Microsoft's free ASP.NET AJAX framework to build interactive, Web 2.0 user interfaces. In particular, this installment looks at advanced UpdatePanel usage scenarios.

Object-Oriented JavaScript: Part 3: Prototypes
This week wraps up our section on Object-Oriented JavaScript with a look at prototypes, the JavaScript execution context, var x, this.x, and x, inheritance using closures and prototypes and more.

Object-Oriented JavaScript: Part 2
Not only can JavaScript functions contain other functions, but they can also be instantiated. This makes JavaScript functions a good candidate for implementing the concept of a class from traditional object-oriented programming.

Object-Oriented JavaScript
This week we cover OOP (Object-Oriented Programming) and how it relates to JavaScript.
Topics covered include: what encapsulation, inheritance and polymorphism mean, how JavaScript functions work, how to implement inheritance using closures, prototypes.

Better Fixed GridView Header for ASP.NET
Learn how object-based JavaScript can be used to clone and fix the position of a grid header, handling runtime changes in the associated grid's appearance.

ThinWire Handbook: Layout Management
ThinWire is an LGPL open source framework that allows you to build responsive, expressive and interactive Web applications without the complexities found with other methods.

Our First Ajax Application
This week you'll learn how to construct a complete and working Ajax application. As with any Ajax application, it will make use of an HTML document, JavaScript routines, a server-side routine (in PHP) and a callback function to deal with the returned data.

Talking Web Clients with JavaScript and the Speech API
See how to make your web clients read or play content to its users. You will experiment with JavaScript from the command line, learn a JavaScript debugging technique that might be useful, and see how to load the Speech API and ask it to read the ALT (text) attribute of HTML controls.

Creating Responsive GUIs with Real-Time Validation
This week you'll learn about real-time validation, when and where to inject this functionality into your own applications and how to validate popular data types such as phone numbers, dates and email addresses.

Ajax and PHP Part 2: XML Communication/Processing
Ajax and PHP 5 both have powerful features for processing and using an XML document. XML is a method of formatting data, often for communication purposes, between different computer systems. In this article, we will show you how to access an XML document with Ajax.

How to Create a JavaScript Animation
JavaScript animations aren't difficult to write. Once you learn a few main ideas, you can create complex animations to display in your browser.
Additionally, the content will be available to search engines because the content is in machine-readable (X)HTML.

Using the DOM class in Ajax with the GWT and Java
Learn how to write the Java code necessary to make effective use of the DOM class in the Google Web Toolkit.

Ajax and PHP - Part 1: Dynamic HTML and Images
So you're interested in Ajax? Ajax is a powerful addition to JavaScript for browser-to-server intercommunication. We will demonstrate a simple script that sends a GET or POST request to a form handling script on a server, then the script will return a response to the browser XMLHttpRequest JavaScript object.

JavaScript vs. Flash for Animation
When you think of interactive multimedia on the Web, you probably think of Flash. But Flash creates accessibility problems, especially for those with disabilities. A solution is to use JavaScript, which avoids many of the issues inherent in Flash.

Resolving Conflicts with JavaScript Form Validation
Since we published our tip on eliminating duplicate form submissions, I've received a number of e-mails indicating that some users are running into problems where the tip's use of the submit button's OnClick event handler is conflicting with the form's existing client-side form validation.

Accessible JavaScripting From the Ground Up
As great as it is, JavaScript is probably one of the most commonly abused and overused technologies. In this article, I hope to help you implement JavaScript without tears and guide you in the basics of good scripting practices.

Ideas for Webbot Projects
This week you'll learn about what you can do with webbots and how they capitalize on browser limitations. You'll learn about creative ideas, exploiting webbots and how to use them to your advantage.

Review: The Book of JavaScript, 2nd Edition
For many people, learning JavaScript can be a bit stressful. Through writing The JavaScript Diaries, I've read and reviewed many books. Some are good and others are a "bit tedious."
A few are excellent. The Book of JavaScript falls in the latter category.

JavaScript Language Essentials
This week you'll learn about the basic elements of JavaScript such as loops, arrays and functions. You'll see how you can use JavaScript to write your Web pages for you, how JavaScript handles user errors and much more.

Doodle: Part 4 - More Power to the User
In previous issues, I kept the user experience somewhat spartan which allowed the application to be modified with a minimum of effort. This time we add points, ellipses, bezier curves and look at a new application class called Control, which allows users to tell the Doodle application what shape they want to draw next.

Consuming Membership and Profile Services via ASP.NET Ajax
You can consume ASP.NET 2.0 application services, such as Membership, Roles, and Profiles, from client-side JavaScript code with ASP.NET Ajax.

Using Variables and Built-in Functions to Update Your Web Pages Automatically
With JavaScript you can update the content on your pages automatically - every day, every hour, or every second. Here, you'll learn about a simple script that automatically changes the date on your web page.

Developing an Ajax-driven Shopping Cart with PHP and Prototype
Create an Ajax-enabled shopping cart in very few lines of code, thanks to the power of PHP and the Prototype JavaScript library.

Make Your Site Script.aculo.us
Discover Script.aculo.us, a client-side framework that gives developers a new way to code in JavaScript by providing new shortcut functions, new powerful objects including Form, Effect, Control and Ajax, and some custom widgets.

Switchy McLayout
This script and CSS combination allows you to define the dimensions, information richness, and appearance of your content objects for set ranges of screen sizes.

Forms Validation with Symfony and Prototype
Looking for a new framework solution? Try symfony.
Specifically, learn how symfony's built-in support for the Prototype JavaScript framework can greatly enhance the user experience within your applications.

Ajax Edit in Place Using Prototype
In this article I will show you how to use the popular prototype JavaScript include file along with an Ajax technique called Edit in Place. Edit in Place uses the XmlHttpRequest Object to call an external page to send and receive information.

Doodle, A Demo Drawing Program: Part 3
In this installment, a solid programming style has been imposed, a JavaScript class model developed for Preezo which supports single inheritance and simple, tightly bound class definition code. This is important when working with large applications.

The Twelve Days of Ajax
Ajax has been jingling all over the place over the last year or so. Vlad takes a look at twelve Ajax frameworks that might be worth more than a day of your time.

Database-enabled Ajax with PHP
Ajax has taken the Web to a new level by offering an intuitive interactive model that rivals the desktop. To compete with desktop applications, you'll learn how to create database-enabled Ajax requests using PHP and MySQL.

PPK on JavaScript: The DOM - Text Nodes, Node Lists and Forms.

PPK on JavaScript: The DOM - Elements, innerHTML, and Attributes
The W3C DOM allows you to create your own elements and text nodes, and add them to the document tree. This week we look at creating and cloning elements, innerHTML, and attributes.

PPK on JavaScript: The DOM - Part 1
In 1998, the W3C published its Level 1 DOM specification, which all browser vendors implemented. This week we're going to spend most of our time working with the Level 1 DOM, but we'll also take a look at the old Level 0 DOM, especially at its useful form-field properties.

RSS and Atom in Action: Newsfeed Formats - Atom
In 2003, a group of bloggers came together to create a new standard, which would later be known as Atom. They wanted to start fresh and do things right this time.
As a result, all the major blog servers either support Atom now or have plans to do so. To learn more, read on.

Understanding Ajax: Part 2 - Iframes, Cookies, and More
This week we continue with our exploration of Ajax. Topics covered are sending a request using an IFrame, creating a hidden IFrame, creating a form, sending a request using a cookie and more.

The JavaScript Diaries: Part 15 - The Date Object
With JavaScript you can display and manipulate the date and time. You can calculate the days between dates, show new items on your Web site, etc. In this installment we'll take a look at the JavaScript Date() object and learn how to utilize it in our scripts.

Prevent Users from Submitting a Form Twice
This article, by Scott Mitchell, looks at two JavaScript techniques that can be employed to prevent users from submitting a form multiple times.

Fiddler Can Make Debugging Easy
Many issues remain undiscovered in Web sites because browsers are designed to allow things to continue to work even when they're not quite right. Discover how Fiddler can be used to expose common problems that may be in your sites.

JavaScript and XML
This week you'll learn how to use JavaScript to work with XML data. Topics covered are obtaining XML documents, loading a document from the network, parsing XML text, XML documents from data islands and manipulating XML with the DOM API.

How to Create a Color Picker in JavaScript
This week you're going to learn how to create a Color Picker similar to the one used in Photoshop, but entirely in JavaScript.

JavaScript Basics: Part 13 - Browser Compatibility
Browser compatibility is one of the biggest issues facing Web developers, and has been since the original browser wars. This article is going to highlight browser compatibility issues and tell you how to avoid or simply deal with them.

JavaScript Basics: Part 12 - The Concept of Recursion
'To understand recursion you must first understand recursion'.
That's my favorite quote related to programming because it so beautifully captures what recursion is. This article takes you further into the concept of recursion and shows you how it's used in JavaScript.

Review: Ajax in 10 Minutes
For Web developers who want to add more interactivity to their Web sites, this book is packed with information. It's well written, but if you're a novice additional programming background is recommended.

The JavaScript Diaries: Part 14 - The Math Object
This installment looks at the Math object, a JavaScript object used to perform mathematical operations such as obtaining the values of predefined mathematical constants. It can also be used to generate random numbers.

Ajax with the ZK Framework
Discover how to utilize the ZK Ajax framework to develop web based Java applications with the look and functionality of desktop applications.

JavaScript Basics: Part 11 - Error Handling
You've written a JavaScript application and it's working fine--until you get an error message. It's something unsightly that pops up on your screen, something like 'myObject.fields is null or not an object'. This article is going to show you how to account for errors and show you several different methods for general error handling.

Javascript Basics: Part 10 - The Ajax Technology.

How to Use the HTTP Protocol
Various protocols are used for communication over the Web, perhaps the most important being HTTP, which is also fundamental to Ajax applications. This week you'll learn about the HTTP protocol and how it's used to request and receive information.

Javascript Basics: Part 9 - Inheritance
Our last article in this series covered the basics of object-oriented programming in JavaScript. This article is going to continue on that path, teaching you methods of inheritance as well as the usefulness (and dangers) of closures.
Object Oriented Javascript: Part 2 - Strings, Dates, and Arrays In the previous article you learned how JavaScript classes could be written to inherit the methods and properties of another class. In this article you'll learn about some extensions that will allow developers to extend intrinsic JavaScript classes such as String, Date and Array.! Design Patterns in JavaScript: Part 1 Design patterns are programming solutions to a specific problem that has been documented so that the developer doesn't need to solve the same problem again. This week, we begin a series of articles that will explore implementing several popular design patterns in JavaScript. JavaScript Basics: Part 8 - Object Oriented Design. Using XML, A PHP Developer's Primer, Part 4: XML-RPC, PHP and JavaScript In this article we will demonstrate how PHP can be used to call upon Web services provided by third part sites via an XML-RPC server. We will also show you how to create your own XML-RPC and use client-side JavaScript to invoke procedures in your PHP scripts. How to Drag and Drop Using JavaScript JavaScript excels at Modifying the DOM of a Web page, but we usually only do simple things with it, such as creating image rollovers, making tabs, etc. This week you're going to learn how to create items on your page that you can drag and drop. JavaScript Basics: Part 7 - Document and Window Objects Primer Number 6: The DOM This is the sixth in a series that will introduce you to the JavaScript language. In this article we will be talking about the Document Object Model, otherwise known as the DOM. The JavaScript Diaries: Part 13 - Array Properties and Methods Now that we know about the different types of arrays, we'll learn how to manipulate them in order to make them more functional. This week we'll look at the properties and methods that are commonly used for most coding situations.. 
JavaScript Primer Number 5: Strings, Numbers and Arrays Up until now we have been focusing on the language constructs of JavaScript: if statements, loops, functions, etc. In this primer we are going to take a step back and cover the inner workings of some of the native JavaScript objects: Strings, Numbers and Arrays.. JavaScript Primer Number 4: Functions and Objects In the last article, we covered the basics on how to work with form fields and delved into functions a bit more. In this article we will fully explain functions and introduce the concepts of objects in JavaScript. DOM Scripting: Unobtrusive JavaScript at Its Best In the world of JavaScript programming, the language is especially powerful when interacting with Web pages using the Document Object Model (DOM). If you've ever wanted to learn the basics of DOM scripting, have a look at this review. JavaScript Primer Number 3 In the last article we covered if-else statements, some basic validation and functions. This time we'll discuss how to use JavaScript to validate a form on your page, and we'll learn about form fields and loops in the process! Placing JavaScripts in External Files It seems that every few days, someone wants to know how to place their JavaScripts in an external file. In this article we'll take a look at the reasons for placing scripts in an external file and how to go about doing it. How to Auto Include a JavaScript File Many developers have a large library of JavaScript code at their fingertips that they developed, their collegues developed, or that they've pieced together. If you've ever wanted to easily find any JavaScript file this article will show you how. JavaScript Primer Number 2 In the last article, we wrote a simple script that involved variables, alerts, prompts, string concatenation and some basic arithmatic operators. In this week's installment, we learn all about the 'if' and 'else' statements. 
JavaScript Primer Number 1 This new JavaScript Primer is the first in a series of articles by our new author Mark Kahn. This article explains what JavaScript is, and shows you how to create your first script! Object Oriented Javascript: Part 1 - Class Inheritance To many object orientation purists, a programming language doesn't cut the mustard unless it supports some form of class inheritance, where one class can 'inherit' the behaviour of another class. This week, you'll learn how JavaScript can support class inheritance for user defined classes.. The JavaScript Diaries: Part 12 - Multiple Array Types This installment takes a look at multidimensional and associative arrays. As you learn to use these you will begin to understand how they can be incorporated into your Web sites? JavaScript Standard Template Library: Part 4 - The stdext Namespace In previous articles you learned about a programming library based on the C++ Standard Template Library. This week we look at four new collections from the stdext namespace and you'll see how easy it is to interchange the collections within a simple JavaScript unit test. Top Ten Reasons AJAX is Here to Stay This topic may not make the Letterman show but it is on the mind of many developers these days. The JavaScript Diaries: Part 11 In this installment we take a look at JavaScript arrays, which can be very useful in creating different types of scripts. This study will span several installments due to their complexity. Zero Day Exploit Hits IE Malicious code targets a flaw in the browser's JavaScript handling. Developing Web Applications with Ajax: Part 3 This week we'll learn how to use Ajax in conjunction with server-side processing and how these technologies can produce powerful Web applications. For this article I use PHP, but Ajax is compatible with any server-side language. Ajax in Action. 
Chapter 6: The User Experience Asynchronous JavaScript and XML, known as "Ajax," is new way of thinking that can result in a flowing and intuitive interaction with the user. Ajax in Action explains how to distribute the application between the client and the server while retaining the integrity of the system. By Manning Publications. Measuring the Benefits of Ajax Ajax, popularized by Google (Gmail and Google Maps), Yahoo (Flickr), and Amazon (A9 Search) give Web apps a desktop experience. However, businesses want to understand what the applicability is to the bottom line. A comparison between a traditional Web application and an Ajax one shows that dramatic quantifiable cost savings can be measured when looking at specific application metrics. How to Develop Web Applications with Ajax: Part 2 In part one of this series, we discussed how to retrieve remote data from an XML file via JavaScript. This week, we'll process that data in a more complex manner. As an example, we'll take groups of XML data, separate individual segments and display those segments in different ways, depending on how they're identified. Implementing Remote Calling Without Using AJAX. The JavaScript Standard Template Library: Part 3. The JavaScript Diaries: Part 9 In this installment, we'll wrap up our study of the window object by learning how to use the most common of the window event handlers. We'll also take a look at some links for additional help. The JavaScript STL (Standard Template Library): Part 2 This week, we conclude our two-part series with information on the list collection, the vector collection and the deque collection. The article wraps up with a demo and an opportunity to download the code. The JavaScript STL (Standard Template Library): Part 1 One of the obstacles that programmers encounter is that each language has its own "culture" and ways of getting things done. 
In the C++ programming language a solution has been developed, known as the Standard Template Library or STL for short. Here's a JavaScript implementation of the STL. begin to implement JavaScript in your Web sites. AJAX: Asynchronous Java + XML? Discover the world of AJAX, the generic application model that can enable more interactive, more responsive, and smarter Web applications. The Return of AJAX? Some relatively new Google applications have prompted renewed interest in a programming technique that is years old. Mozilla's New Array Methods When the next version of Firefox is released later this year, it will be sporting several updates to its JavaScript engine. Part of this JavaScript upgrade directly affects the Array object, which gains several new methods. How to Develop Web Applications with Ajax: Part 1 In the past, Web applications were limited because a Web page had to be reloaded (or another page loaded in its place) in order for new data to be obtained. Recently, a new method, known as "Ajax" (Asynchronous Javascript and XML applications) asynchronously retrieves XML data via JavaScript. Ajax will allow you to take your Web applications to the next level.. Professional JavaScript for Web Developers: JavaScript in the Browser - Part 2 This week, we continue our exploration. A sampling of topics covered here are navigating and opening new windows, system dialogs, intervals and timeouts, the document object, the location object and more. By WROX Press. Book Excerpt: Professional JavaScript for Web Developers - Part 1 Web browsers have come a long way over the years and can now handle a variety of file formats, not just conventional HTML. Here, you'll learn how JavaScript fits into HTML, other languages, and some basic concepts of the Browser Object Model (BOM). Create Universally Related Popup Menus: Single Form Version 3 In this article, the author modifies Andy King's original version of the Universally Related Popup Menus (URPMs). 
His intention was to make it more suitable for submitting data to a server and to simplify the JavaScript "O" objects used to store all the related list data. Read on to see how he did it. The JavaScript Diaries: Part 5 This week, as we continue our quest to learn the JavaScript language, we'll look at conditional statements and loops. These can help us to add more depth and complexity to our scripts. By Lee Underwood. Using JavaScript Components in Java Studio Creator Discover how you can draw on the wealth of ready-made JavaScript Components and Libraries within Java Studio Creator to create a richer and more complete user interface experience. Implementing AJAX Using ASP.NET 1.1 AJAX is an acronym that stands for Asynchronous JavaScript and XML. AJAX's strong point is that it allows data on a page to be dynamically updated without the browser having to reload the page. This article offers a brief introduction and description of AJAX and then provides some sample code illustrating its usage. Creating an Autosuggest Textbox with JavaScript: Part 3 In this installment week you'll learn how to complete the modifications, make your suggestions case insensitive and get the suggestions back from the server instead of using client-side information. By Nicholas C. Zakas.. By Lee Underwood Core JavaScript Reference: Version 1.5 This downloadable book (in HTML format) is a reference manual for the core JavaScript language, version 1.5. Written by the developers at Netscape Communications. An excellent resource. The JavaScript Diaries: Part 2 In the first installment, we looked at some general information and guidelines to help prepare us for our study of JavaScript. This week, we delve into parts of the language and we'll also write our first script. Core JavaScript Guide: Version 1.5 This downloadable book (in HTML format) explains everything you need to know about using core JavaScript. Written by the developers at Netscape Communications, it is an excellent resource. 
The JavaScript Diaries: Part 1 JavaScript is a versatile language which can be used to create menus, validate forms, provide interactive calendars, post the current day's headlines, track a visitor's history on your site and much more. This week is part one of an ongoing series on the process of learning JavaScript. Creating an Autosuggest Textbox with JavaScript: Part 2 In the first part of this series, you learned how to create type ahead functionality in a textbox, which presents the user with a single suggestion for what they've already typed. This article builds upon that functionality by adding a dropdown list of multiple suggestions. By Nicholas C. Zakas. How to Populate Fields from New Windows Using JavaScript Occasionally, filling out Web page forms can be daunting. Fortunately, some forms display a question mark next to the form field, which opens a popup window containing additional information. This week, you'll learn how to enhance the functionality of those windows. By Jonathan Fenocchi. Creating an Autosuggest Textbox with JavaScript: Part 1 One of Google's new applications is Google Suggest. As you type, Google suggests search terms that come up with results. While not a new implementation, it's quickly becoming popular among developers. This week, you'll learn how to build an autosuggest control one step at a time. By Nicholas C. Zakas. JavaScript and Accessibility: Part 3 Today, you'll learn about fixes and creative options for Drop-down Navigation Selections and DHTML Menus. Other topics covered are proprietary alternatives, document.all and innerHTML. By Jonathan Fenocchi. JavaScript and Accessibility: Part 2 Last week we began this series with a discussion about new practical and standards-compliant use of JavaScript. We also covered some classical techniques and how to fix them. We continue that process this week, where we look at form validation and rollovers. 
How to Create a WYSIWYG Rich Text Editor in JavaScript: Part 2 Today, you'll learn about different methods for handling keyboard input and the RichEdit control. Keyboard input is handled by two functions; onKeyPress() and onKeyDown(). The the RichEdit constructor creates the necessary HTML elements to display the control. By Guyon Roche. How to Create a WYSIWYG Rich Text Editor in JavaScript: Part 1 On forums, there are many features that allow users to compose richly formatted text with animated emoticons, etc., but you can't see what the finished work will look like until it's posted or unless a preview is generated. Here, author Guyon Roche uses Javascript to show you how your text will look as you write it. Easy Table Reading in JavaScript Readability is a key factor on any web site as it enhances the user experience. This article focuses on taking advantage of JavaScript's style-alternating capabilities and event-handlers to make any dynamic web page easier to read in a few seconds. Stereoscopic Vision in JavaScript Remember when stereoscopic images were all the rage? Crowds of people would gather around printed images of a random grouping of colored dots, hoping that a 3D scene would emerge from the chaos. Today, you'll learn how these images actually work and how a simple JavaScript program can dynamically generate just such an image. How to Use JavaScript ++ One of the problems facing JavaScript developers is that the core features are only supported in the latest browsers. In this article, users will learn how to use JavaScript to solve this problem. JavaScript Popup Media Player Playing videos in popup windows is often a simple task, but when you end up with multiple pages for each individual video, things can quickly become complicated. We can clean that up with JavaScript, and still make it accessible to users who don't have JavaScript.. 
JavaScript Programming: An Introduction to dltypeof() To fully appreciate the functionality of dltypeof(), you should be familiar with the JS operator it attempts to replace, typeof(), and the frustrations associated with the limitations of the typeof() operator. The History of JavaScript and Databases JavaScript and databases have a mixed history, due in part to JavaScript's success. But Web security provisions restricts the ability of a Web program to read (and write) local files. Here are some options to change that. By Jacques Surveyor. The Irony of JavaScript's Success JavaScript has had a roller coaster ride of early phenomenal success, then some bruisings and now renewed success as a macro language, but this recent success may be the highpoint for JavaScript. JavaScript Synchronized Frame Scrolling: Part 3: Putting It All Together In the final chapter of this series, we synchronize the verticaland horizontal frames. We place code from the previous articles into one so that both frames will scroll in both directions: horizontally and vertically. JavaScript Trends: Mixed Signals Given the popularity of the Web and the role JavaScript plays in Web development, one would expect JavaScript to be ranked in the top ten programming languages. But if you take the measure of JavaScript in 2004 you get mixed signals. How to Create a JavaScript Web Page Screen Saver This JavaScript tutorial shows you how to implement a screen saver on a web page. It activates after a timeout, hides the current content of the page, creates an attractive display and more. By Guyon Roche. JavaScript Synchronized Frame Scrolling, Pt. 2: Horizontal Scrolling In part one of this series, we looked at how to synchronize two vertically scrolling frames. Today, we'll look at how to make these frames scroll horizontally. This is useful for horizontally oriented sites, such as those designed with Flash or CSS. By Jonathan Fenocchi. 
Creating a Textbox with JavaScript Auto-Complete As a user types in new values using the Auto-Complete feature of Internet Explorer, it maintains a list of values that the user has entered. But there are some limitations. These are overcome in this article, using JavaScript. A Scrolling Grid A problem often encountered in web design is condensing large tables of data into a standard 800x600 web page. In this article you'll Iearn how to use JavaScript to render any amount of table data into a small grid. By Guyon Roche. JavaScript Image Preloader One of JavaScript's greatest strengths is the generation of HTML code on the fly. One of the hurdles to overcome when generating HTML is to ensure that any images referenced using image tags are properly loaded. By Guyon Roche. Ad-Rotation in JavaScript Ad-Rotation is widespread and important for many sites, such as those that offer free services. In this article, you'll learn how to generate random advertisements in JavaScript, and explore some other features along the way. By Jonathan Fenocchi. Compact. JavaScript Core Objects. Pt. 2 This week, author Ellie Quigley covers the Wrapper, String, Number, Boolean and Function Objects. These core objects are consistent across different implementations and platforms and have been standardized by the ECMAScript 1.0 specification, allowing programs to be portable. By Prentice Hall.. Form Elements Overlapping A Styled Layer In this article, author Khalid Ali discusses a problem with form elements and how they can overlap styled elements. Here, he presents a workaround for it, using JavaScript. Creating a Server Control for JavaScript Testing Learn how to create an ASP.NET server control that detects if JavaScript is supported AND enabled in a user's browser. Object Detection in the New Browser Age Guest author Eddie Traversa argues the case in favor of object-based browser detection in JavaScript. 
The trick, of course, is knowing which unique objects are supported within each browser and platform. Creating Unique Automatic JavaScript Objects Need to create a JavaScript object, and ensure that one--and only one--instance is available to your scripts? Guest author Philip Chalmers discusses his technique for creating single JavaScript object instances that can be accessed from multiple scripts. IBuySpy, Part I: Installing Time now to see how JScript .NET and ASP.NET can be used together in a complete e-commerce application. Follow along as our own 007 (a.k.a, "Doc JavaScript") begins a new series that shows you how to convert the C#/VB based IBuySpy storefront into a JScript .NET implementation. By Yehuda Shiran and Tomer Shiran. Book Excerpt: Beyond HTML Goodies Goodies is as goodies does--or something like that. Our excerpt from chapter 6 of "Beyond HTML Goodies" runs through a collection of basic JavaScript-based form tricks, including auto submitting a form and select box-based navigation. From Que. JScript .NET, Part VII: Web Services and ASP.NET The Doc is back with more server-side services, this time utilizing ASP.NET. Just as JavaScript is the perfect helper for HTML on the client side, JScript .NET complements ASP.NET on the server. By Yehuda Shiran and Tomer Shiran. JScript .NET, Part V: Polymorphism Perhaps more difficult to pronounce then understand, polymorphism is a key feature of any object oriented language; and, as Doc Javascript explains, can be implemented in JScript .NET using both methods and interfaces. By Yehuda Shiran and Tomer Shiran. JScript .NET, Part IV: Inheritance Contrary to Mom's instructions, you don't always have to share your toys. With the right keywords, JScript .NET lets you ensure that your base methods cannot be replaced. JavaScript for Non-programmers It's possible to use JavaScript in your Web pages without spending months at night-school learning how the language works. 
Some scripts are plug and play - drop them in the right place on your HTML page and they'll work straight away. Others need only a small amount of customization to meet your needs. Here we take a look at JavaScript from a non-programmers point of view Practical JavaScript for the Usable Web This is a new kind of JavaScript book. It's not cut'n'paste, it's not a reference, and it's not an exhaustive investigation of the JavaScript language. It is about client-side, web-focused, and task-oriented JavaScript. JScript .NET, Part II: Major Features JScript .NET includes some features that are common to JavaScript, and other features more common to C++ or other programming languages. Doc's examination continues with a look at the definition of typed data, enumerated, and class-based variables. By Yehuda Shiran and Tomer Shiran. JScript .NET, Part I: The Mechanics While consuming a Web service is most commonly done by the client, dishing them out is a job for the server. Take a walk on the server-side, and learn the basics of Microsoft's latest ECMAScript based language: JScript .NET. By Yehuda Shiran and Tomer Shiran. JavaScript Design - Chapter 17 JavaScript Design shows designers how to create interactive JavaScript applications for the Web. This excerpt discusses Working with XML and JavaScript. From New Riders Publishing. Creating a Cross-Browser (DOM) Expandable Tree Using DOM-based JavaScript techniques, guest author Nicholas C. Zakas teaches you how to create an expandable/collapsible navigation component for your Web pages. And lo and behold: the same script works identically in both IE5+ and Netscape 6.1+ - no browser specific branching required. Book Excerpt: Detecting Plug-ins and Operating Systems JavaScript can tell you a lot about a surfer's environment, provided you know where to look. Our final installment from "Designing with JavaScript" shows you how to check for installed plug-ins and operating systems. From O'Reilly. 
Designing with JavaScript, 2nd Edition WebRef offers you sample chapters from the book Designing with JavaScript, 2nd Edition. Before you learn how JavaScript differentiates between one browser and another, you need to understand how JavaScript gets information from the browser. Designing with JavaScript, 2nd Edition - Part 3 This third and final installment in this series covers document properties, objects, properties, methods, and time shifts. Designing with JavaScript shows you how to create the effects you want, without forcing you to wade through pages of dry programmer-speak about variables, operators, and functions. Designing with JavaScript, 2nd Edition: Part 2 In this second installment we look at the script tag and displaying the page. Designing with JavaScript shows you how to create the effects you want, without forcing you to wade through pages of dry programmer-speak about variables, operators, and functions. Book Excerpt: JavaScript: The Definitive Guide Now in its fourth edition, this classic reference from O'Reilly covers JavaScript from A to Z, including a full language reference and syntax tutorials. Our excerpt discusses the W3C's Document Object Model: how it is applied to HTML documents, and browser specific implications. From O'Reilly & Associates. Professional JavaScript: Updating the Ticker The conclusion of our two part mini-series from the Wrox Press title "Professional JavaScript" brings the BBC online ticker up to date. Integrating external data feeds, CSS interactions, and ticker implementation are some of the key topics covered. Book Excerpt: Professional JavaScript 2nd Edition What does it take to rework old JavaScript code to accommodate today's new browsers and standards? This two part excerpt discusses a real-world example of the process, from initial examination to final deployment. From Wrox Press. 
Embedding Movies with Flash, Part I: Basic Methods Audio clips aren't the only kind of Flash objects you can control with JavaScript. Today the good Doctor begins a new series that shows you how to interact with Flash movies in your Web pages. By Yehuda Shiran. How-To Tutorials Tools
http://www.webdeveloper.com/javascript/
I have been putting together some communications object blocks with floating point data interfaces. This format of Audio objects has been discussed on this forum, and documented by Chip Audette in his OpenAudio_ArduinoLibrary library and the Tympan library. It occurred to me that one of the blocks might have application in 16-bit audio applications, and so I converted it to have a fixed point Q15 interface, the same as the Teensy Audio Library. This note is just about the 16-bit integer version and makes this block available for such use.

The AudioFilterEqualizer designs, analyzes and applies a general FIR-filter-based multi-band equalizer. The number of bands is essentially unlimited (51 now) and the frequency limits of the bands are arbitrary. The relative gain for each band is set by specifying a dB level. The Fourier transform of this multi-band response is windowed by a Kaiser window, whose specified side-lobe parameter determines the trade-off between the rate of transition between bands and the side-lobe levels for a steep transition. The number of coefficients (taps) of the FIR filter is variable from 4 to 250. Many details are covered in the introductory material to the .h file listed below.

In summary, the equalizer object is instantiated as an AudioFilterEqualizer object with a single input and output. It needs no 'begin' function, as it comes up running a 4-tap pass-through filter. The equalizer details are entered by an equalizerNew() function that can also be called on-the-fly to change settings. The equalizer is specified by a pair of float arrays supplied by the .INO. Likewise, an int16_t array is .INO supplied to hold the FIR coefficients. A getResponse() function gives back the dB response that is actually achieved, including quantization effects. A float32_t array to hold these is again .INO supplied. This block is more general than most graphic equalizers, in that the lower attenuation level is not limited to -12 dB.
This allows standard LP, HP, BP and BR responses to be included. The biggest limitation in using the equalizer is the low frequency response. This is limited by the number of IR taps. At a sample rate of 44.1 kHz the 200 tap response below 100 or 200 Hz is in the right direction, but somewhat approximate.There are ways to improve this, but it remains a factor. That said, for communications applications where the lowest frequency of interest is around 200 Hz, it is not a problem. The internally designed FIR filter is symmetric which means that the delay through the filter is constant for all frequencies (linear phase). Here is the include AudioFilterEqualizer_I16.h file, with supplementary info. Next is the AudioFilterEqualizer_I16.cpp file.Next is the AudioFilterEqualizer_I16.cpp file.Code:/* * AudioFilterEqualizer_I16 * * Created: Bob Larkin W7PUA 14 May 2020 * * This is a direct translation of the receiver audio equalizer written * by this author for the open-source DSP-10 radio in 1999. See * and * * This version processes blocks of 16-bit integer audio (as opposed to * the Chip Audette style F32 floating point baudio.) * * Credit and thanks to PJRC, Paul Stoffregen and Teensy products for the audio * system and library that this is built upon as well as the float32 * work of Chip Audette embodied in the OpenAudio_ArduinoLibrary. Many thanks * for the library structures and wonderful Teensy products. * * This equalizer is specified by an array of 'nBands' frequency bands * each of of arbitrary frequency span. The first band always starts at * 0.0 Hz, and that value is not entered. Each band is specified by the upper * frequency limit to the band. * The last band always ends at half of the sample frequency, which for 44117 Hz * sample frequency would be 22058.5. Each band is specified by its upper * frequency in an .INO supplied array feq[]. The dB level of that band is * specified by a value, in dB, arranged in an .INO supplied array * aeq[]. 
Thus a trivial bass/treble control might look like: * nBands = 3; * feq[] = {300.0, 1500.0, 22058.5}; * float32_t bass = -2.5; // in dB, relative to anything * float32_t treble = 6.0; * aeq[] = {bass, 0.0, treble}; * * It may be obvious that this equalizer is a more general case of the common * functions such as low-pass, band-pass, notch, etc. For instance, a pair * of band pass filters would look like: * nBands = 5; * feq[] = {500.0, 700.0, 2000.0, 2200.0, 22058.5}; * aeq[] = {-100.0, 0.0, -100.0, 2.0, -100.0}; * where we added 2 dB of gain to the 2200 to 2400 Hz filter, relative to the 500 * to 700 Hz band. * * An octave band equalizer is made by starting at some low frequency, say 40 Hz for the * first band. The lowest frequency band will be from 0.0 Hz up to that first frequency. * Next multiply the first frequency by 2, creating in our example, a band from 40.0 * to 80 Hz. This is continued until the last frequency is about 22058 Hz. * This works out to require 10 bands, as follows: * nBands = 10; * feq[] = { 40.0, 80.0, 160.0, 320.0, 640.0, 1280.0, 2560.0, 5120.0, 10240.0, 22058.5}; * aeq[] = { 5.0, 4.0, 2.0, -3.0, -4.0, -1.0, 3.0, 6.0, 3.0, 0.5 }; * * For a "half octave" equalizer, multiply each upper band limit by the square root of 2 = 1.414 * to get the next band limit. For that case, feq[] would start with a sequence * like 40, 56.56, 80.00, 113.1, 160.0, ... for a total of about 20 bands. * * How well all of this is achieved depends on the number of FIR coefficients * being used. In the Teensy 3.6 / 4.0 the resourses allow a hefty number, * say 201, of coefficients to be used without stealing all the processor time * (see Timing, below). The coefficient and FIR memory is sized for a maximum of * 250 coefficients, but can be recompiled for bigger with the define FIR_MAX_COEFFS. * To simplify calculations, the number of FIR coefficients should be odd. If not * odd, the number will be reduced by one, quietly. 
* * If you try to make the bands too narrow for the number of FIR coeffficients, * the approximation to the desired curve becomes poor. This can all be evaluated * by the function getResponse(nPoints, pResponse) which fills an .INO-supplied array * pResponse[nPoints] with the frequency response of the equalizer in dB. The nPoints * are spread evenly between 0.0 and half of the sample frequency. * * Initialization is a 2-step process. This makes it practical to change equalizer * levels on-the-fly. The constructor starts up with a 4-tap FIR setup for direct * pass through. Then the setup() in the .INO can specify the equalizer. * The newEqualizer() function has several parameters, the number of equalizer bands, * the frequencies of the bands, and the sidelobe level. All of these can be changed * dynamically. This function can be changed dynamically, but it may be desireable to * mute the audio during the change to prevent clicks. * * This 16-bit integer version adjusts the maximum coefficient size to scale16 in the calls * to both equalizerNew() and getResponse(). Broadband equalizers can work with full-scale * 32767.0f sorts of levels, where narrow band filtering may need smaller values to * prevent overload. Experiment and check carefully. Use lower values if there are doubts. 
 *
 * For a pass-through function, something like this (which can be intermixed with fancy equalizers):
 *   float32_t fBand[] = {10000.0f, 22058.5f};
 *   float32_t dbBand[] = {0.0f, 0.0f};
 *   equalize1.equalizerNew(2, &fBand[0], &dbBand[0], 4, &equalizeCoeffs[0], 30.0f, 32767.0f);
 *
 * Measured Q15 timing of update() for a 128 sample block on a T3.6:
 *   Fixed time                11.6 microseconds
 *   Per FIR coefficient time   5.6 microseconds
 *   Total for 200 FIR coefficients = 1140 microseconds (39.3% of fs=44117 Hz available time)
 *
 * Copyright (c) 2020 Bob Larkin
 * Any snippets of code from PJRC ...
 */
#ifndef _filter_equalizer_h
#define _filter_equalizer_h

#include "Arduino.h"
#include "arm_math.h"
#include "Audio.h"
#include "AudioStream.h"

#ifndef MF_PI
#define MF_PI 3.1415926f
#endif

// Temporary timing test
#define TEST_TIME_EQ 0

#define EQUALIZER_MAX_COEFFS 250
#define ERR_EQ_BANDS     1
#define ERR_EQ_SIDELOBES 2
#define ERR_EQ_NFIR      3

class AudioFilterEqualizer : public AudioStream {
public:
    AudioFilterEqualizer(void) : AudioStream(1, inputQueueArray) {
        // Initialize FIR instance (ARM DSP Math Library) with default simple passthrough FIR
        if (arm_fir_init_q15(&fir_inst, nFIRused, (q15_t *)cf16,
                             &StateQ15[0], AUDIO_BLOCK_SAMPLES) != ARM_MATH_SUCCESS) {
            cf16 = NULL;
        }
    }
    uint16_t equalizerNew(uint16_t _nBands, float32_t *feq, float32_t *adb,
                          uint16_t _nFIR, int16_t *_cf, float32_t kdb, float32_t scale16);
    void getResponse(uint16_t nFreq, float32_t *rdb, float32_t scale16);
    void update(void);

private:
    audio_block_t *inputQueueArray[1];
    uint16_t block_size = AUDIO_BLOCK_SAMPLES;
    int16_t  firStart[4] = {0, 32767, 0, 0};  // Initialize to passthrough
    int16_t* cf16 = firStart;   // pointer to current coefficients
    uint16_t nFIR = 4;          // Number of coefficients
    uint16_t nFIRdesign = 3;    // used in designing filter
    uint16_t nFIRused = 4;      // Adjusted nFIR, nFIR-1 for nFIR odd.
    uint16_t nBands = 2;
    float32_t sample_rate_Hz = AUDIO_SAMPLE_RATE;

    // *Temporary* - TEST_TIME allows measuring time in microseconds for each part of the update()
#if TEST_TIME_EQ
    elapsedMicros tElapse;
    int32_t iitt = 999000;  // count up to a million during startup
#endif

    // ARM DSP Math library filter instance
    arm_fir_instance_q15 fir_inst;
    int16_t StateQ15[AUDIO_BLOCK_SAMPLES + EQUALIZER_MAX_COEFFS];  // max, max

    /* float i0f(float x)  Returns the modified Bessel function Io(x).
     * Algorithm is based on Abramowitz and Stegun, Handbook of Mathematical
     * Functions, and Press, et al., Numerical Recipes in C.
     * All in 32-bit floating point.
     */
    float i0f(float x) {
        float af, bf, cf;
        if ((af = fabsf(x)) < 3.75f) {
            cf = x/3.75f;
            cf = cf*cf;
            bf = 1.0f + cf*(3.515623f + cf*(3.089943f + cf*(1.20675f + cf*(0.265973f +
                 cf*(0.0360768f + cf*0.0045813f)))));
        }
        else {
            cf = 3.75f/af;
            bf = (expf(af)/sqrtf(af))*(0.3989423f + cf*(0.0132859f + cf*(0.0022532f +
                 cf*(-0.0015756f + cf*(0.0091628f + cf*(-0.0205771f + cf*(0.0263554f +
                 cf*(-0.0164763f + cf*0.0039238f))))))));
        }
        return bf;
    }
};
#endif

And two simple .INO files to show usage follow.

Code:
/* AudioFilterEqualizer_I16.cpp
 *
 * Bob Larkin, W7PUA  14 May 2020
 *
 * See AudioFilterEqualizer_I16.h for much more explanation on usage.
 *
 * Copyright (c) 2020 Bob Larkin
 * Any snippets of code from PJRC ...
 */
#include "AudioFilterEqualizer_I16.h"

void AudioFilterEqualizer::update(void) {
    audio_block_t *block, *block_new;
#if TEST_TIME_EQ
    if (iitt++ > 1000000) iitt = -10;
    uint32_t t1, t2;
    t1 = tElapse;
#endif
    block = receiveReadOnly();
    if (!block) return;

    // Check for coefficients
    if (cf16 == NULL) {
        release(block);
        return;
    }

    // get a block for the FIR output
    block_new = allocate();
    if (block_new) {
        // apply the FIR
        arm_fir_q15(&fir_inst, block->data, block_new->data, AUDIO_BLOCK_SAMPLES);
        transmit(block_new);   // send the FIR output
        release(block_new);
    }
    release(block);
#if TEST_TIME_EQ
    t2 = tElapse;
    if (iitt++ < 0) {
        Serial.print("At AudioEqualizer end, microseconds = ");
        Serial.println(t2 - t1);
    }
    t1 = tElapse;
#endif
}

/* equalizerNew() calculates the Equalizer FIR filter coefficients.  Works from:
 *   uint16_t equalizerNew(uint16_t _nBands, float32_t *feq, float32_t *adb,
 *                         uint16_t _nFIR, int16_t *_cf, float32_t kdb, float32_t scale16)
 *   nBands   Number of equalizer bands
 *   feq      Pointer to array feq[] of nBands breakpoint frequencies, fractions of sample rate, Hz
 *   adb      Pointer to array aeq[] of nBands levels, in dB, for the feq[] defined frequency bands
 *   nFIR     The number of FIR coefficients (taps) used in the equalizer
 *   cf       Pointer to an array of int16 to hold FIR coefficients
 *   kdb      A parameter that trades off sidelobe levels for sharpness of band transition.
 *            kdb=30 sharp cutoff, higher sidelobes
 *            kdb=60 slow cutoff, low sidelobes
 *   scale16  A float number that sets the maximum int value for coefficients.  Max 32768.0f
 *
 * The arrays feq[], aeq[] and cf[] are supplied by the calling .INO
 *
 * Returns: 0 if successful, or an error code if not.
 * Errors:  1 = ERR_EQ_BANDS     = Too many bands, 50 max
 *          2 = ERR_EQ_SIDELOBES = Sidelobe level out of range, must be > 0
 *          3 = ERR_EQ_NFIR      = nFIR out of range
 *
 * Note - This function runs at setup time, and there is no need to fret about
 * processor speed.
 * Likewise, local arrays are created on the stack and memory space is
 * available for other use when this function closes.
 */
uint16_t AudioFilterEqualizer::equalizerNew(uint16_t _nBands, float32_t* feq,
        float32_t* adb, uint16_t _nFIR, int16_t* pcf16, float32_t kdb, float32_t scale16) {
    uint16_t i, j;
    uint16_t nHalfFIR;
    float32_t beta, kbes;
    float32_t q, xj2, scaleXj2, WindowWt;
    float32_t cf[250];
    float32_t fNorm[50];    // Normalized to the sampling frequency
    float32_t aVolts[50];   // Convert from dB to "quasi-Volts"

    cf16 = pcf16;           // Set the private copies
    nFIR = _nFIR;
    nBands = _nBands;

    // Check range of nFIR.  q15 FIR requires even number of coefficients,
    // but for historic reasons, we design odd number FIR.  So add a
    // variable nFIRused that is even, and one more than the design value.
    if (nFIR < 4 || nFIR > EQUALIZER_MAX_COEFFS) return ERR_EQ_NFIR;
    if (2*(nFIR/2) == nFIR) {   // nFIR even
        nFIRdesign = nFIR - 1;
        nFIRused = nFIR;
    }
    else {                      // nFIR odd.  Avoid this
        nFIRdesign = nFIR - 2;
        nFIRused = nFIR - 1;
    }
    nHalfFIR = (nFIRdesign - 1)/2;           // If nFIRdesign=199, nHalfFIR=99

    for (int kk = 0; kk < nFIRdesign; kk++)  // To be sure, zero the coefficients
        cf[kk] = 0.0f;

    // Convert dB to voltage ratios, frequencies to fractions of sampling freq
    if (nBands < 2 || nBands > 50) return ERR_EQ_BANDS;
    for (i = 0; i < nBands; i++) {
        aVolts[i] = powf(10.0, (0.05*adb[i]));
        fNorm[i] = feq[i]/sample_rate_Hz;
    }

    /* Find FIR coefficients, the Fourier transform of the frequency
     * response.  This is done by dividing the response into a sequence
     * of nBands rectangular frequency blocks, each of a different level.
     * We can precalculate the sinc Fourier transform for each rectangular band.
     * The linearity of the Fourier transform allows us to sum the transforms
     * of the individual blocks to get pre-windowed coefficients.
     *
     * Numbering example for nFIRdesign==199:
     * Subscript 0 to 98 is 99 taps; 100 to 198 is 99 taps; 99+1+99=199 taps.
     * The center coef (for nFIRdesign=199 taps, nHalfFIR=99) is a
     * special case that comes from sin(0)/0 and is treated first:
     */
    cf[nHalfFIR] = 2.0f*(aVolts[0]*fNorm[0]);   // Coefficient "99"
    for (i = 1; i < nBands; i++) {
        cf[nHalfFIR] += 2.0f*aVolts[i]*(fNorm[i] - fNorm[i-1]);
    }
    for (j = 1; j <= nHalfFIR; j++) {           // Coefficients "100 to 198"
        q = MF_PI*(float32_t)j;
        // First, deal with the zero frequency end band that is "low-pass."
        cf[j + nHalfFIR] = aVolts[0]*sinf(fNorm[0]*2.0f*q)/q;
        // and then the rest of the bands that have low and high frequencies
        for (i = 1; i < nBands; i++)
            cf[j + nHalfFIR] += aVolts[i]*((sinf(fNorm[i]*2.0f*q)/q) -
                                           (sinf(fNorm[i-1]*2.0f*q)/q));
    }

    /* At this point, the cf[] coefficients are simply truncated sin(x)/x, creating
     * very high sidelobe responses.  To reduce the sidelobes, a windowing function is applied.
     * This has the side effect of increasing the rate of cutoff for sharp frequency changes.
     * The only windowing function available here is that of James Kaiser.  This has a number
     * of desirable features.  These include being able to trade off sidelobe level
     * for rate of cutoff between frequency bands.
     * We specify it in terms of kdb, the highest sidelobe, in dB, next to a sharp cutoff.
     * For calculating the windowing vector, we need a Kaiser parameter beta, found as follows:
     */
    if (kdb < 0) return ERR_EQ_SIDELOBES;
    if (kdb > 50)
        beta = 0.1102f*(kdb - 8.7f);
    else if (kdb > 20.96f && kdb <= 50.0f)
        beta = 0.58417f*powf((kdb - 20.96f), 0.4f) + 0.07886f*(kdb - 20.96f);
    else
        beta = 0.0f;
    // i0f is the floating point in & out zero'th order modified Bessel function
    kbes = 1.0f / i0f(beta);    // An additional derived parameter used in loop

    // Apply the Kaiser window; j = 0 is the center coeff window value
    scaleXj2 = 2.0f/(float32_t)nFIRdesign;
    scaleXj2 *= scaleXj2;
    for (j = 0; j <= nHalfFIR; j++) {   // For 199 taps, this is 0 to 99
        xj2 = (int16_t)(0.5f + (float32_t)j);
        xj2 = scaleXj2*xj2*xj2;
        WindowWt = kbes*(i0f(beta*sqrtf(1.0 - xj2)));
        cf[nHalfFIR + j] *= WindowWt;          // Apply the Kaiser window to upper half
        cf[nHalfFIR - j] = cf[nHalfFIR + j];   // and create the lower half
    }

    // Find the biggest to decide the scaling factor for the FIR filter.
    // Then we will scale the coefficients according to scale16.
    float32_t cfmax = 0.0f;
    for (j = 0; j <= nHalfFIR; j++)     // 0 to 99 for nFIRdesign=199
        if (cfmax < fabsf(cf[j])) cfmax = fabsf(cf[j]);

    // scale16 is a float number, such as 16384.0, that sets the maximum +/-
    // value for coefficients.  This is a complex subject that needs more discussion
    // than we can put here.  The following scales the coefficients and converts to 16 bit:
    for (j = 0; j < nFIRdesign; j++)
        cf16[j] = (int)(scale16*cf[j]/cfmax);
    // nFIRused is always even; nFIRdesign is always odd.  So add a zero
    cf16[nFIRdesign] = 0;

    // The following puts the numbers into the fir_inst structure
    arm_fir_init_q15(&fir_inst, nFIRused, (int16_t *)cf16, &StateQ15[0], (uint32_t)block_size);
    return 0;
}

/* Calculate response in dB.  Leave nFreq-point result in array rdb[] that is supplied
 * by the calling .INO.  See Parks and Burrus, "Digital Filter Design," p27 (Type 1).
 */
void AudioFilterEqualizer::getResponse(uint16_t nFreq, float32_t *rdb, float32_t scale16) {
    uint16_t i, j;
    float32_t bt;
    float32_t piOnNfreq;
    uint16_t nHalfFIR;
    float32_t cf[nFIR];

    nHalfFIR = (nFIRdesign - 1)/2;    // If nFIRdesign=199, nHalfFIR=99
    for (i = 0; i < nFIRdesign; i++)
        cf[i] = ((float32_t)cf16[i]) / scale16;
    piOnNfreq = MF_PI / (float32_t)nFreq;
    for (i = 0; i < nFreq; i++) {
        bt = cf[nHalfFIR];              // Center coefficient
        for (j = 0; j < nHalfFIR; j++)  // Add in the others twice, as they are symmetric
            bt += 2.0f*cf[j]*cosf(piOnNfreq*(float32_t)((nHalfFIR - j)*i));
        rdb[i] = 20.0f*log10f(fabsf(bt));   // Convert to dB
    }
}

Code:
/* TestEqualizer2.ino  Bob Larkin 10 May 2020
 * This is a test of the Filter Equalizer for Teensy Audio.
 * It also tests the .getResponse() function for determining
 * the actual filter response in dB.
 * This version is for 16-bit Teensy Audio Library.
 * A float32 version is available.
 */
#include "Audio.h"
#include "AudioFilterEqualizer_I16.h"

AudioInputI2S           i2s1;
AudioSynthWaveformSine  sine1;      // Test signal
AudioFilterEqualizer    equalize1;
AudioRecordQueue        queue1;     // The LSB output
AudioConnection         patchCord1(sine1, 0, equalize1, 0);
AudioConnection         patchCord2(equalize1, 0, queue1, 0);

// This 10-band octave band equalizer is set strangely in order to demonstrate the Equalizer
float32_t fBand[] = { 40.0, 80.0, 160.0, 320.0, 640.0, 1280.0, 2560.0, 5120.0, 10240.0, 22058.5};
float32_t dbBand[] = {10.0,  8.0,   6.0,   3.0,  -2.0,    0.0,    0.0,    6.0,    10.0,   -100};

float32_t scaleCoeff = 16384.0f;  // Max allowed size of FIR coefficients, depends on equalizer use
int16_t equalizeCoeffs[250];
int16_t dt1[128];
int16_t *pq1, *pd1;
int16_t i;
float32_t dBResponse[500];        // Show lots of detail for a plot

void setup(void) {
    AudioMemory(10);
    Serial.begin(300);
    delay(1000);
    Serial.println("*** Test Audio Equalizer ***");
    // Sine wave is default +/- 8192 max/min
    sine1.frequency(1000.0f);
    // Initialize the equalizer with 10 bands, 200 FIR coefficients and -60 dB sidelobes, 16384 max
    // coefficient
    uint16_t eq = equalize1.equalizerNew(10, &fBand[0], &dbBand[0], 200,
                      &equalizeCoeffs[0], 60.0f, scaleCoeff);
    if (eq) Serial.println("Error in equalizer setup.");

    // Get frequency response in dB for 500 points, uniformly spaced from 0 to 22058 Hz
    // this is an AudioFilterEqualizer function called as
    //    void getResponse(uint16_t nFreq, float32_t *rdb, float32_t scale16);
    equalize1.getResponse(500, dBResponse, scaleCoeff);
    Serial.println("Freq Hz, Response dB");
    for (int16_t m = 0; m < 500; m++) {
        // Print the response in Hz, dB, suitable for a spreadsheet
        Serial.print((float32_t)m * 22058.5f / 500.0f);
        Serial.print(",");
        Serial.println(dBResponse[m], 7);
    }
    i = -10;
}

void loop(void) {
    if (i < 0) i++;              // Get past startup data
    if (i == 0) queue1.begin();

    // Print a 128 sample block of the filtered output with sine1 as an input.
    // This "if" will be active for i == 0
    if (queue1.available() >= 1 && i >= 0) {
        pq1 = queue1.readBuffer();
        pd1 = &dt1[0];
        for (uint k = 0; k < 128; k++)
            *pd1++ = *pq1++;
        i = 1;                   // Only collect 1 block
        queue1.freeBuffer();
        queue1.end();            // No more data to queue1
    }
    if (i == 1) {
        // Uncomment the next 3 lines to print out a sample of the sine wave.
        Serial.println("128 Samples: ");
        for (uint k = 0; k < 128; k++)
            Serial.println(dt1[k]);
        i = 2;
    }
}

Bug or success reports on any of this would be appreciated. It is all a work in progress.

Code:
/* TestEqualizer2Audio.ino  Bob Larkin 10 May 2020
 * This is a test of the Filter Equalizer for Teensy Audio.
 * Runs two different equalizers, switching every 5 seconds to demonstrate
 * dynamic filter changing.
 */
#include "Audio.h"
#include "AudioFilterEqualizer_I16.h"

// Signals from left ADC to Equalizer to left DAC using Teensy Audio Adaptor
AudioInputI2S         i2sIn;
AudioFilterEqualizer  equalize1;
AudioOutputI2S        i2sOut;
AudioConnection       patchCord1(i2sIn, 0, equalize1, 0);
AudioConnection       patchCord2(equalize1, 0, i2sOut, 0);
AudioControlSGTL5000  codec;

// Some sort of octave band equalizer as one alternative, 10 bands
float32_t fBand1[] = { 40.0, 80.0, 160.0, 320.0, 640.0, 1280.0, 2560.0, 5120.0, 10240.0, 22058.5};
float32_t dbBand1[] = {10.0,  2.0,  -2.0,  -5.0,  -2.0,   -4.0,  -20.0,    6.0,    10.0,   -100};
float32_t scaleCoeff = 32765.0f;  // Max allowed size of FIR coefficients; a smidge under 2^15

// To contrast, put a strange bandpass filter as an alternative, 5 bands
float32_t fBand2[] = { 300.0, 500.0, 800.0, 1000.0, 22058.5};
float32_t dbBand2[] = {-60.0,   0.0, -20.0,    0.0,   -60.0};

int16_t equalizeCoeffs[200];
int16_t k = 0;

void setup(void) {
    AudioMemory(10);
    Serial.begin(300);
    delay(1000);
    Serial.println("*** Test Audio Equalizer ***");
    codec.enable();
    codec.inputSelect(AUDIO_INPUT_LINEIN);
    // Initialize the equalizer with 10 bands, fBand1[], 199 FIR coefficients,
    // -65 dB sidelobes, max coefficient value from scaleCoeff
    uint16_t eq = equalize1.equalizerNew(10, &fBand1[0], &dbBand1[0], 199,
                      &equalizeCoeffs[0], 65.0f, scaleCoeff);
    if (eq) Serial.println("Error in equalizer setup.");
    for (k = 0; k < 200; k++)
        Serial.println(equalizeCoeffs[k]);
}

void loop(void) {
    // Change between two filters every 5 seconds.
    // To run with just the 10-band equalizer, comment out the entire loop with "/* ... */"
    if (k == 0) {
        k = 1;
        equalize1.equalizerNew(10, &fBand1[0], &dbBand1[0], 200, &equalizeCoeffs[0], 65, scaleCoeff);
    }
    else {   // Swap back and forth
        k = 0;
        equalize1.equalizerNew(5, &fBand2[0], &dbBand2[0], 190, &equalizeCoeffs[0], 40, scaleCoeff);
    }
    delay(5000);   // Interrupts will keep the audio going
}

Thanks, Bob
Whenever a client connects, the client_connected_handler will run with the following two arguments, which provide a (reader, writer) pair and allow you to use the stream's high-level API:

client_reader: An asyncio.StreamReader object.
client_writer: An asyncio.StreamWriter object.

The code in the client_connected_handler coroutine displays a message indicating that a client connection has been received and enters a loop that calls another coroutine, the client_reader.read method. It uses the yield from keywords to wait for the StreamReader object to read up to 8192 bytes without blocking. When the client_reader.read coroutine finishes its execution, the code displays the read data and makes a call to client_writer.write to send the previously read data back to the client. The client_writer.write method writes the data bytes received as an argument to the transport without blocking. The method calls the corresponding transport method that buffers the data and arranges for it to be sent out asynchronously. If there is no more data to read, the while loop breaks its execution. This way, the server sends the data received back to the client.

Now, let's move on to the client code. After creating an event loop named loop, the code makes a call to loop.run_until_complete to execute the simple_echo_client coroutine. This coroutine calls the asyncio.open_connection coroutine, which creates a streaming transport connection to the host and port specified as arguments with the socket type set to SOCK_STREAM. A successful connection returns a (reader, writer) pair. The code uses the yield from keywords to wait without blocking for the asyncio.open_connection coroutine to create the streaming transport connection with the StreamReader and StreamWriter objects associated with the transport and protocol. Then, the code calls writer.write many times to send a few lines of text to the transport.
Because the client expects the server to send back some data, the code enters a loop that calls another coroutine, the reader.readline method. Again, the code uses the yield from keywords to wait for the StreamReader object to read a sequence of bytes ending with \n or until EOF is received. The code displays each read line and the while loop breaks when this line is equal to the one defined as the last line.

In order to make the code easier to read, I haven't included all the necessary try…finally blocks. As I explained in the previous installment, you can ignore the existence of the yield from keywords for exception handling purposes. If an exception occurs within any of the coroutines called with the yield from keywords, it will be raised the same way as when you don't use yield from.

Working with asyncio.Task

As I explained in the previous installment, an asyncio.Task is a coroutine wrapped inside a future and runs as long as the event loop runs. The following lines show an example of another TCP echo server with a line oriented protocol that uses tasks, coroutines, and callbacks. You can execute the following lines in a Python console and then run a telnet client to localhost on port 2222 as explained in the first example. Enter some text, press Enter, and you will see the line coming back from the server. In this case, the code doesn't display messages in the console and simply focuses on handling the connection and the stream readers and writers.
import asyncio

clients = {}  # task -> (reader, writer)

def client_connected_handler(client_reader, client_writer):
    # Start a new asyncio.Task to handle this specific client connection
    task = asyncio.Task(handle_client(client_reader, client_writer))
    clients[task] = (client_reader, client_writer)

    def client_done(task):
        # When the task that handles the specific client connection is done
        del clients[task]

    # Add the client_done callback to be run when the future becomes done
    task.add_done_callback(client_done)

@asyncio.coroutine
def handle_client(client_reader, client_writer):
    # Handle the requests for a specific client with a line oriented protocol
    while True:
        # Read a line
        data = (yield from client_reader.readline())
        # Send it back to the client
        client_writer.write(data)

loop = asyncio.get_event_loop()
server = loop.run_until_complete(
    asyncio.start_server(client_connected_handler, 'localhost', 2222))
try:
    loop.run_forever()
finally:
    loop.close()

clients keeps track of all the clients that are connected to the server with tasks of (reader, writer) pairs. It is usually useful for performing specific operations with clients, such as killing their connections or broadcasting data to all of them.

After creating an event loop named loop, the code makes a call to loop.run_until_complete to call the asyncio.start_server coroutine, which starts a socket server bound to the specified host and port and executes the callback specified as an argument for each client connected, client_connected_handler. As happened in the previous example, client_connected_handler is another coroutine and it will be automatically converted to a Task. By this point, you already know that the client_connected_handler receives the two arguments that provide a (reader, writer) pair. In this case, the code starts a new asyncio.Task to handle this specific client connection; that is, a task of a (reader, writer) pair.
The task executes the handle_client coroutine with client_reader and client_writer as the arguments. The code saves the task of a (reader, writer) pair in clients, and calls the task.add_done_callback method to add the client_done callback to be run when the future is done. Thus, when handle_client finishes its execution, the client_done callback will be executed and will remove the task from clients. In more complex scenarios, you can use the saved tasks of (reader, writer) pairs to perform different kinds of operations with all the connected clients.

The handle_client code is easy to understand because it just uses the StreamReader and StreamWriter objects to read a line from the client and write back to it with a line-oriented protocol.

Conclusion

The new asyncio module reboots coding asynchronous I/O with Python and provides an easy-to-use high-level streams API. In truth, it provides everything you need to work with the most common I/O scenarios without installing additional packages. However, when you need greater control for complex scenarios, you can take advantage of event loop policies, tasks, specialized synchronization primitives, and specific logging and debugging features. If you want to dive deeper into the additional features provided by the module, browse the many advanced examples in the Google Code repository for asyncio. The repository still uses the previous name for the module (Tulip), but the examples still work with the version included in Python 3.4.

Gastón Hillar is a senior contributing editor at Dr. Dobb's.

Related Article
The New asyncio Module in Python 3.4: Event Loops
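As a closing sketch tying the stream examples together, the same reader/writer pattern can be exercised end to end in one self-contained script. This uses the modern async/await syntax that later replaced @asyncio.coroutine and yield from (the port is chosen automatically here purely for illustration; it is not part of the article's examples):

```python
import asyncio

async def handle_echo(reader, writer):
    # Server side: read one line and send it straight back
    data = await reader.readline()
    writer.write(data)
    await writer.drain()
    writer.close()

async def main():
    # Start an echo server on an OS-assigned port, then connect as a client
    server = await asyncio.start_server(handle_echo, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection('127.0.0.1', port)
    writer.write(b'Hello, echo server\n')
    await writer.drain()
    line = await reader.readline()   # wait for the echoed line

    writer.close()
    server.close()
    await server.wait_closed()
    return line

if __name__ == '__main__':
    print(asyncio.run(main()))
```

Running it prints the echoed line, confirming the round trip through the StreamWriter and StreamReader pair.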
I have the following test code:

import org.specs2.mutable.Specification
import org.specs2.specification.{BeforeExample, Before, Scope}

class TestSpec extends Specification with BeforeExample {
  def before = println("before!!!!")

  "Test" should {
    "run before 1" in {
      println("test 1")
      success
    }
    "run before 2" in {
      println("test 2")
      success
    }
    "run before 3" in {
      println("test 3")
      success
    }
  }
}

I expect something like:

before!!!!
test 1
before!!!!
test 2
before!!!!
test 3
...

But get the output:

before!!!!
before!!!!
before!!!!
test 3
test 2
test 1
Test should
  run before 1
  run before 2
  run before 3

Why such a strange order? Are the tests run in parallel? In this synthetic example it does not matter, but if before performs a database cleanup or something similar, this execution order breaks the testing.
Aaj ki seekh: MD5 api

The MD5 algorithm takes as input a message of arbitrary length and produces a 128-bit message digest or fingerprint of the input. For more info on MD5 refer to the RFC page.

Just a simple program to show usage of the MD5 API to calculate a message digest.

#include <stdio.h>
#include <stdlib.h>   /* malloc, free */
#include <string.h>   /* strlen */
#include <md5.h>
/* Link with the -lmd5 library */
/* gcc -lmd5 <program_name> */
/* this program generates a 128 bit MD5 digest of a given string */
main()
{
    int i = 0;
    unsigned char *buffer = "We shall generate a message digest of this string";
    unsigned char *output = (unsigned char *)malloc(16*sizeof(char));
    /* store the 128 bit digest in this string */

    printf("\n-Using the md5_calc functions ----\n");
    md5_calc(output, buffer, strlen(buffer));
    for (i = 0; i < 16; i++)
        printf("%d:", (int)(output[i]));

    printf("\n-Using the MD5 functions ----\n");
    MD5_CTX context;
    MD5Init(&context);
    MD5Update(&context, buffer, strlen(buffer));
    MD5Final(output, &context);
    for (i = 0; i < 16; i++) {
        printf("%d:", (int)(output[i]));
    }
    free(output);
}

The output (the digest is displayed colon separated):

-Using the md5_calc functions ----
208:123:118:146:90:32:215:98:10:11:23:26:78:229:127:76:
-Using the MD5 functions ----
208:123:118:146:90:32:215:98:10:11:23:26:78:229:127:76:

Posted at 04:37PM Dec 10, 2006 by dakshina in Sun

Getting the output of a shell command from a C program using popen

Sometimes it's necessary to access the output of a shell command (more than just the return value) in a C program. One way could be to redirect it to a file and then access it. The other would be by using the popen function.

#include <stdio.h>
#include <string.h>   /* strcpy */

main()
{
    char cmd[80];
    FILE *fptr;
    char out[256];
    int ret;

    strcpy(cmd, "ls -l");
    fptr = popen(cmd, "r");
    while (1) {
        fgets(out, 256, fptr);
        if (feof(fptr))
            break;
        puts(out);
    }
    ret = pclose(fptr);
}
/* Note: tested with S10 gcc only */

Posted at 10:52AM Dec 01, 2006 by dakshina in Sun

Configuring apache + SSL service for S10

Just another blog for setting up apache shipped with S10 ...
Note: For creating server side certificates a very detailed help can be found @ . And hence I am not rewriting them here.

cp /etc/apache2/httpd-conf-example to /etc/apache2/httpd.conf

Set the properties:
    Server name
    Listen Port number
    Document root

export JAVA_HOME=< >
/usr/apache2/bin/apachectl start
OR
# svcadm disable apache2 ; # svcadm enable apache2

===============================================================
Enabling SSL service on Apache2

# svccfg
svc:> select apache2
svc:/network/http:apache2> listprop httpd/ssl
httpd/ssl boolean false
svc:/network/http:apache2> setprop httpd/ssl=true
svc:/network/http:apache2> exit
# svcadm disable apache2
# svcadm enable apache2
# svcprop -p httpd/ssl svc:/network/http:apache2
false
# svcadm refresh apache2
# svcprop -p httpd/ssl svc:/network/http:apache2
true

Posted at 10:58AM Nov 30, 2006 by dakshina in Sun

CGI/Perl script for uploading files

Here's a small perl script that I have used for uploading files to a webserver. The location can be changed. Right now it saves the files to /tmp/upload1.

#!/usr/bin/perl
use CGI;
my $query = new CGI;
print $query->header();
# Expects the client to send the name of the file to be uploaded in an input field "file"
my $filename = $query->param("file");
my $fpath1 = "/tmp/upload1/$filename";
open(UPLOADFILE, ">$fpath1") || die "Cannot open file";
$filename =~ s/.*[\/\\](.*)/$1/;
my $upload_filehandle = $query->upload("file");
my $buf;
while (read($upload_filehandle, $buf, 1024)) {
    print UPLOADFILE $buf;
}
close UPLOADFILE;

# This has been tested on Solaris only
# Can be used to transfer binary files also
# For WINDOWS the BINMODE option may be needed
Create new CA (Certification Authority) The CA.pl is located at in Solaris 10 /usr/sfw/bin Change the perl path to /usr/bin/perl in line 1 > CA.pl -newca > cp ./demoCA/cacert.pem . > cp ./demoCA/private/cakey.pem . > openssl x509 -text -in cacert.pem B. Generate RSA key and second level CA > openssl genrsa -out ca2key.pem > openssl req -new -key ca2key.pem -out ca2req.pem > openssl ca -cert cacert.pem -keyfile cakey.pem \ -out ca2cert.pem -infiles ca2req.pem > openssl verify -CAfile cacert.pem ca2cert.pem C. Sign RSA key with second level CA > openssl req -new -key rsakey.pem -out rsareq.pem > openssl ca -cert ca2cert.pem -keyfile ca2key.pem \ -out rsacert.pem -infiles rsareq.pem > openssl verify -CAfile cacert.pem -untrusted ca2cert.pem rsacert.pem Posted at 10:14AM Nov 30, 2006 by dakshina in Sun | Random | i'm here !!! My first blog :) @ blogs.sun.com Abt me : I've been @Sun for nearly 2 years now. And i like it here. So hello everybody !! Posted at 11:12AM Aug 29, 2006 by dakshina in Sun | Today's Page Hits: 24
The System.Web.UI.MobileControls namespace includes the ASP.NET mobile controls. Many of these classes closely resemble the web form controls in the System.Web.UI.WebControls namespace. However, there are two key differences. First of all, because mobile devices typically use lighter-weight browsers that don't offer rich client features like JavaScript and Dynamic HTML, the mobile controls can only offer a subset of the web control functionality. Also, because mobile controls need to render to different types of markup (like cHTML, HTML, and WML), each mobile control needs the support of a set of control adapters. You'll find the control adapters for each control in the System.Web.UI.MobileControls.Adapters namespace. Figure 40-1 through Figure 40-5 show the types in this namespace.
This document contains information about mechanisms available in mod_wsgi for automatic reloading of source code when an application is changed and any issues related to those mechanisms. Note: How source code reloading has been handled has changed between mod_wsgi 1.X and mod_wsgi 2.X. This document has been updated so as to no longer refer to mod_wsgi 1.X. Thus, ensure you are running the most up to date version of mod_wsgi 2.X otherwise you may find that some of what is described here will not work as you expect. What is achievable in the way of automatic source code reloading depends on which mode your WSGI application is running. If your WSGI application is running in embedded mode then what happens when you make code changes is largely dictated by how Apache works, as it controls the processes handling requests. In general, if using embedded mode you will have no choice but to manually restart Apache in order for code changes to be used. If using daemon mode, because mod_wsgi manages directly the processes handling requests and in which your WSGI application runs, there is more avenue for performing automatic source code reloading. As a consequence, it is important to understand what mode your WSGI application is running in. If you are running on Windows, are using Apache 1.3, or have not used WSGIDaemonProcess/WSGIProcessGroup directives to delegate your WSGI application to a mod_wsgi daemon mode process, then you will be using embedded mode. 
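For reference, the daemon-mode delegation mentioned above is configured with directives along these lines; the process group name, script path, and process/thread counts here are purely illustrative, not taken from this document:

```apache
WSGIDaemonProcess myapp processes=2 threads=15
WSGIScriptAlias /myapp /usr/local/wsgi/scripts/myapp.wsgi

<Directory /usr/local/wsgi/scripts>
    WSGIProcessGroup myapp
    Order allow,deny
    Allow from all
</Directory>
```

With this in place, requests for the application run in the 'myapp' daemon process group rather than inside the Apache server child processes.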
If you are not sure whether you are using embedded mode or daemon mode, then substitute your WSGI application entry point with:

def application(environ, start_response):
    status = '200 OK'

    if not environ['mod_wsgi.process_group']:
        output = 'EMBEDDED MODE'
    else:
        output = 'DAEMON MODE'

    response_headers = [('Content-Type', 'text/plain'),
                        ('Content-Length', str(len(output)))]

    start_response(status, response_headers)

    return [output]

If your WSGI application is running in embedded mode, this will output to the browser 'EMBEDDED MODE'. If your WSGI application is running in daemon mode, this will output to the browser 'DAEMON MODE'.

However you have configured Apache to mount your WSGI application, you will have a script file which contains the entry point for the WSGI application. This script file is not treated exactly like a normal Python module and need not even use a '.py' extension. It is even preferred that a '.py' extension not be used for reasons described below.

For embedded mode, one of the properties of the script file is that by default it will be reloaded whenever the file is changed. The primary intent with the file being reloaded is to provide a second chance at getting any configuration in it and the mapping to the application correct. If the script weren't reloaded in this way, you would need to restart Apache even for a trivial change to the script file.

Do note though that this script reloading mechanism is not intended as a general purpose code reloading mechanism. Only the script file itself is reloaded, no other Python modules are reloaded. This means that if modifying normal Python code files which are used by your WSGI application, you will need to trigger a restart of Apache. For example, if you are using Django in embedded mode and needed to change your 'settings.py' file, you would still need to restart Apache.
That only the script file and not the whole process is reloaded also has a number of implications, and imposes certain restrictions on what code in the script file can do or how it should be implemented.

The first issue is that when the script file is imported, if the code makes modifications to sys.path or other global data structures and the changes are additive, checks should first be made to ensure that the change has not already been made, else duplicate data will be added every time the script file is reloaded.

This means that when updating sys.path, instead of using:

```python
import sys
sys.path.append('/usr/local/wsgi/modules')
```

the more correct way would be to use:

```python
import sys

path = '/usr/local/wsgi/modules'
if path not in sys.path:
    sys.path.append(path)
```

This will ensure that the path doesn't get added multiple times.

Even where the script file is named so as to have a '.py' extension, the fact that the script file is not treated like a normal module means that you should never try to import the file from another code file using the 'import' statement or any other import mechanism. The easiest way to avoid this is not to use the '.py' extension on script files, or to never place script files in a directory which is located on the standard module search path, nor add the directory containing the script into sys.path explicitly.

If an attempt is made to import the script file as a module, the result will be that it will be loaded a second time as an independent module. This is because script files are loaded under a module name which is keyed to the full absolute path for the script file and not just the basename of the file. Importing the script file directly and accessing it will therefore not result in the same data being accessed as exists in the script file when loaded.

That the script file is not treated like a normal Python module also has implications when it comes to using the "pickle" module in conjunction with objects contained within the script file.
In practice what this means is that neither function objects, class objects or instances of classes which are defined in the script file should be stored using the "pickle" module. The technical reasons for the limitations on the use of the "pickle" module in conjunction with objects defined in the script file are further discussed in the document "Issues With Pickle Module".

The act of reloading script files also means that any data previously held by the module corresponding to the script file will be deleted. If such data constituted handles to database connections, and the connections are not able to clean up themselves when deleted, it may result in resource leakage. One should therefore be cautious of what data is kept in a script file. Preferably the script file should only act as a bridge to code and data residing in a normal Python module imported from an entirely different directory.

As explained above, the only facility that mod_wsgi provides for reloading source code files in embedded mode is the reloading of just the script file providing the entry point for your WSGI application. If you don't have a choice but to use embedded mode and still desire some measure of automatic source code reloading, one option available which works for both Windows and UNIX systems is to force Apache to recycle the Apache server child process that handles the request automatically after the request has completed.

To enable this, you need to modify the value of the MaxRequestsPerChild directive in the Apache configuration. Normally this would be set to a value of '0', indicating that the process should never be restarted as a result of the number of requests processed. To have it restart a process after every request, set it to the value '1' instead.

```
MaxRequestsPerChild 1
```

Do note however that this will cause the process to be restarted after any request.
That is, the process will even be restarted if the request was for a static file or a PHP application and wasn't even handled by your WSGI application. The restart will also occur even if you have made no changes to your code.

Because a restart happens regardless of the request type, using this method is not recommended. Because of how the Apache server child processes are monitored and restarts handled, it is technically possible that this method will yield performance which is worse than CGI scripts. For that reason you may even be better off using a CGI/WSGI bridge to host your WSGI application. At least that way the handling of other types of requests, such as for static files and PHP applications, will not be affected.

If using mod_wsgi daemon mode, what happens when the script file is changed is different to what happens in embedded mode. In daemon mode, if the script file changed, rather than just the script file being reloaded, the daemon process which contains the application will be shutdown and restarted automatically.

Detection of the change in the script file will occur at the time of the first request to arrive after the change has been made. The way that the restart is performed does not affect the handling of the request, with it still being processed once the daemon process has been restarted. In the case of there being multiple daemon processes in the process group, a cascade effect will occur, with successive processes being restarted until the request is again routed to one of the newly restarted processes.

In this way, restarting of a WSGI application when a change has been made to the code is a simple matter of touching the script file if daemon mode is being used. Any daemon processes will then automatically restart without the need to restart the whole of Apache.
So, if you are using Django in daemon mode and needed to change your 'settings.py' file, once you have made the required change, also touch the script file containing the WSGI application entry point. Having done that, on the next request the process will be restarted and your Django application reloaded.

If you are using daemon mode of mod_wsgi, restarting of processes can to a degree also be controlled by a user, or by the WSGI application itself, without restarting the whole of Apache.

To force a daemon process to be restarted, if you are using a single daemon process with many threads for the application, then you can embed a page in your application (password protected hopefully) that sends an appropriate signal to itself. This should only be done for daemon processes and not within the Apache child processes, as sending such a signal within a child process may interfere with the operation of Apache.

That the code is executing within a daemon process can be determined by checking the 'mod_wsgi.process_group' variable in the WSGI environment passed to the application. The value will be non empty if it is a daemon process.

```python
if environ['mod_wsgi.process_group'] != '':
    import signal, os
    os.kill(os.getpid(), signal.SIGINT)
```

This will cause the daemon process your application is in to shut down. The Apache process supervisor will then automatically restart your process ready for subsequent requests. On the restart it will pick up your new code. This way you can control a reload from your application through some special web page specifically for that purpose.

You can also send this signal from an external application, but a problem there may be identifying which process to send the signal to.
If you are running the daemon process(es) as a distinct user/group to Apache and each application is running as a different user, then you could just look for the Apache (httpd) processes owned by the user the application is running as, as opposed to the Apache user, and send them all signals. If the daemon process is running as the same user as Apache, or there are distinct applications running in different daemon processes but as the same user, it may be harder to determine which daemon processes to send the signal to.

Either way, to make it easier to identify which processes belong to a daemon process group, you can use the 'display-name' option to the WSGIDaemonProcess directive to name the process. On many platforms, when this option is used, that name will then appear in the output from the 'ps' command instead of the name of the actual Apache server binary.

A Python module implementing a code-change monitor would be used by importing it into the script file, starting the monitoring system and adding any additional non Python files which should be tracked:

```python
import os
import monitor

monitor.start(interval=1.0)
monitor.track(os.path.join(os.path.dirname(__file__), 'site.cf'))

def application(environ, start_response):
    ...
```

Where needing to add many non Python files in a directory hierarchy, such as template files which would otherwise be cached within the running process, the os.path.walk() function could be used to traverse all files and add required files based on extension or other criteria using the 'track()' function.

This mechanism would generally work adequately where a single daemon process is used within a process group. You would need to be careful however when multiple daemon processes are used. This is because it may not be possible to synchronise the checks exactly across all of the daemon processes. As a result you may end up with the daemon processes running a mixture of old and new code until they all synchronise with the new code base.
This problem can be minimised by defining a short interval time between scans, however that will increase the overhead of the checks.

Using such an approach may in some cases be useful if using mod_wsgi as a development platform. It certainly would not be recommended that you use this mechanism for a production system. The reasons for not using it on a production system are the additional overhead and the chance that daemon processes are restarted when you are not expecting them to be. For example, in a production environment where requests are coming in all the time, you do not want a restart triggered when you are part way through making a set of changes which cover multiple files, as it is likely then that an inconsistent set of code will be loaded and the application will fail.

Note that you should also not use this mechanism on a system where you have configured mod_wsgi to preload your WSGI application as soon as the daemon process has started. If you do that, then the monitor thread will be recreated immediately and so for every single code change on a preloaded file you make, the daemon process will be restarted, even if there is no intervening request.

If preloading was really required, the example code would need to be modified so as to not use signals to restart the daemon process, but instead reset to zero the variable saved away in the WSGI script file that records the modification time of the script file. This will have the effect of delaying the restart until the next request has arrived. Because that variable holding the modification time is an internal implementation detail of mod_wsgi and not strictly part of its published API or behaviour, you should only use that approach if it is warranted.

On the Windows platform there is no daemon mode, only embedded mode. The MPM used on Apache is the 'winnt' MPM. This MPM is like the worker MPM on UNIX systems except that there is only one process.
Being embedded mode, modifying the WSGI script file only results in the WSGI script file itself being reloaded; the process as a whole is not reloaded. Thus there is normally no way, through modifying the WSGI script file or any other Python code file used by the application, of having the whole application reloaded automatically.

The recipe in the previous section can be used with daemon mode on UNIX systems to implement an automated scheme for restarting the daemon processes when any code change is made, but because Windows lacks the 'fork()' system call, daemon mode isn't supported there in the first place. Thus, the only way one can have code changes picked up on Windows is to restart Apache as a whole.

Although a full restart is required, Apache on Windows only uses a single child server process and so the impact isn't as significant as on UNIX platforms, where many processes may need to be shutdown and restarted. With that in mind, it is actually possible to modify the prior recipe for restarting a daemon process to restart Apache itself.

To achieve this sleight of hand, it is necessary to use the Python 'ctypes' module to get access to a special internal Apache function which is available in the Windows version of Apache, called 'ap_signal_parent()'. The required change to get this to work is to replace the restart function in the previous code.

Other than that, the prior code would be used exactly as before. Now when any change is made to Python code used by the application or any other monitored files, Apache will be restarted automatically for you. As before, it is probably recommended that this only be used during development and not on a production system.
http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode
Watchdog timer question

Can someone explain the correct use of the watchdog timer... I have various combinations of SiPy or FiPy on Pysense or Expansion Board 3.1's, all with the latest firmware as of today. The code is simply reading sensors and sending data to Adafruit.io via wifi, sigfox or lte. Occasionally the code would hang (typically 1 per 200 cycles or thereabouts)... not sure where, as they were running on batteries so not connected to the REPL. I added a WDT to the code (single thread) and set it at 3 x the cycle time of the code... still had hangs (still 1 per 200 cycles or thereabouts). I've now put a separate pseudo-WDT on some of the Pysense combinations, again set at 3 x the cycle time of the code, but this time in a separate thread to the main code and the WDT... so far (24 hours) no hangs... so this raises the question: should the WDT be on its own thread?

Doesn't feeding the watchdog simply delay the reset? That doesn't help my situation. I already have a watchdog timer in the code and when it hung, the watchdog didn't activate, so I'm not sure if the watchdog hung because it was in the same thread as the code, or if the WDT class runs in a separate thread by design, which implies a bigger problem with the WDT in general.

- misterlisty last edited by

Just create a timer alarm that feeds the watchdog.. if your program hangs it will reboot.

```python
from machine import Timer, WDT

wdt = WDT(timeout=30000)  # hardware watchdog; reboots if not fed in time

class WD:
    def __init__(self):
        self.seconds = 0
        self.__alarm = Timer.Alarm(self._seconds_handler, 3, periodic=True)

    def _seconds_handler(self, alarm):
        wdt.feed()
```
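On the "doesn't feeding simply delay the reset?" point: yes, that is exactly the contract — each feed restarts the countdown, and a hang stops the feeding so the reset eventually fires. The behaviour can be sketched in plain desktop Python (a simulation with made-up names, not the Pycom machine.WDT API):

```python
import threading
import time

class SoftWatchdog:
    """Toy software watchdog: calls on_timeout unless fed within timeout seconds."""

    def __init__(self, timeout, on_timeout):
        self.timeout = timeout
        self.on_timeout = on_timeout
        self._timer = None
        self.feed()  # arm the first countdown

    def feed(self):
        # Feeding only *postpones* the reset: it cancels the pending
        # countdown and starts a new one.  If the code that is supposed
        # to feed hangs, nothing cancels the countdown and it fires.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_timeout)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer is not None:
            self._timer.cancel()

events = []
wd = SoftWatchdog(0.2, lambda: events.append('reset'))

# A healthy main loop feeds in time, so no reset occurs...
for _ in range(3):
    time.sleep(0.05)
    wd.feed()

# ...then we simulate a hang by no longer feeding.
time.sleep(0.5)
wd.stop()
print(events)  # -> ['reset']
```

This is why the watchdog only helps if the code that feeds it is the code that can hang: move the feeding into an interrupt or timer callback, as in the reply above, and a hung main loop will no longer be caught unless the callback itself also stops.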
https://forum.pycom.io/topic/4745/watchdog-timer-question
>>> On 2/1/2008 at 2:36 AM, Tim Deegan <Tim.Deegan@xxxxxxxxxx> wrote: > Hi, > > At 14:24 +0800 on 01 Feb (1201875864), Su, Disheng wrote: >> REPS prefix emulation for INS and OUTS is already in xen. Can we enable >> REPS for "stos" and "movs" in paging mode? Although it doesn't benefit a >> lot for normal hvm guest, in virtualpc case, I have to do that. >> VirtualPC will call "reps stos"(ecx=0x400, eax=0) a lot, about 1000 >> times per second. It seems to clear a L1 page table page, it can't be >> fast unshadowed in check_for_early_unshadow and it's even ReadOnly in >> guest page table(CR0.wp = 0). So we get 0x400 * 1000 times page faults >> per second, so CPU cycle is almost wasted on it. > > This patch looks fine to me; I'll let Keir decide whether the extra 200 > lines or so of shadow/emulate code is worthwhile for running VPC under Xen. > > Keir, even if we don't take this, please do take this hunk: > > diff -r 05b3bdb3b7fa xen/arch/x86/mm/shadow/multi.c > --- a/xen/arch/x86/mm/shadow/multi.c Wed Jan 30 00:09:03 2008 +0800 > +++ b/xen/arch/x86/mm/shadow/multi.c Wed Jan 30 00:09:04 2008 +0800 > @@ -4076,8 +4091,8 @@ static void *emulate_map_dest(struct vcp > sh_ctxt->mfn2 = emulate_gva_to_mfn(v, (vaddr + bytes - 1) & > PAGE_MASK, > sh_ctxt); > if ( !mfn_valid(sh_ctxt->mfn2) ) > - return ((mfn_x(sh_ctxt->mfn1) == BAD_GVA_TO_GFN) ? > + return ((mfn_x(sh_ctxt->mfn2) == BAD_GVA_TO_GFN) ? > MAPPING_EXCEPTION : MAPPING_UNHANDLEABLE); > > /* Cross-page writes mean probably not a pagetable */ > sh_remove_shadows(v, sh_ctxt->mfn2, 0, 0 /* Slow, can fail */ ); > > Cheers, > > Tim. If I could chime in here, I'm all for better support of CR0.WP = 0 guests. NetWare has traditionally run in this mode, and although we've recently made it capable of running with WP bit set as part of the paravirtualization effort, there would be benefit to running older versions as HVM guests. 
I haven't had the cycles myself to work on that, but with all the other changes to real mode support, it looks like we're really close to having NetWare run fully virtualized. (Another gottcha is that we write the debug registers in real mode, but that should be a fairly simple hypervisor fix.) Thanks to everyone who has worked on getting HVM support more robust. - Bru.
https://lists.xenproject.org/archives/html/xen-devel/2008-02/msg00035.html
#include <wx/creddlg.h> This class represents a dialog that requests a user name and a password from the user. Currently it is implemented as a generic wxWidgets dialog on all platforms. Simple example of using this dialog assuming MyFrame object has a member m_request of wxWebRequest type: Constructor. Use ShowModal() to show the dialog. See Create() method for parameter description. Create the dialog constructed using the default constructor. Returns the credentials entered by the user. This should be called if ShowModal() returned wxID_OK. Sets the current user name. This function may be called before showing the dialog to provide the default value for the user name, if it's different from the one given at the creation time.
https://docs.wxwidgets.org/3.1.5/classwx_credential_entry_dialog.html
```java
// Week 2, Checkpoint: Payroll Program Part 1
// Request payroll info and display a printout in monetary form.

import java.util.Scanner; // using class Scanner

public class Multiplication
{
   // main method begins execution of Java application
   public static void main( String args[] )
   {
      // create Scanner to obtain input from command window
      Scanner input = new Scanner( System.in );

      double rate;  // hourly rate
      double hours; // hours worked
      double pay;   // sum of hourly times hours worked

      // prompt for and input employee's name
      System.out.println( "Please enter the employee's name or STOP to end the program: " );
      String nameOfEmployee = input.nextLine(); // read a line of text
      System.out.println();

      System.out.println( "Thank you for using the program...goodbye" );

      System.out.println( "Enter the employee's hourly pay rate: " ); // prompt
      rate = input.nextInt(); // read first number from user

      if ( rate > 0 )
         System.out.println( "The rate must be a positive number, please enter another number: " );

      System.out.println( "Enter the employee's hours worked: " ); // prompt
      hours = input.nextInt(); // read second number from user

      if ( hours > 0 )
         System.out.println( "The rate must be a positive number, please enter another number: " );

      pay = hours * rate; // pay due

      System.out.print( nameOfEmployee );
      System.out.printf( " is due $%,.2f\n", pay );

   } // end method main
} // end class Multiplication
```

Ok, I'm a beginner, can't stand coding but I understand after 10 years in the IT field and three years in management I need to start learning. Here is what I'm trying to do... and failing, obviously. I need to make the code continue to request employee information until the user enters STOP as the employee name. The hourly rate and hours worked also need to be positive. The current code won't stop at STOP and allows negative numbers.

edit: modified title ~ jayman9
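For what it's worth, one common shape for the two things being asked — a sentinel-controlled outer loop on the name, and re-prompting validation for the numbers — looks like this. This is a sketch with made-up names, not a drop-in replacement for the code above:

```java
import java.util.Scanner;

public class PayrollLoop {

    // Pure calculation so it can be checked without console I/O.
    public static double computePay(double rate, double hours) {
        return rate * hours;
    }

    // Keep prompting until the user supplies a positive number.
    public static double readPositiveDouble(Scanner input, String prompt) {
        System.out.println(prompt);
        double value = input.nextDouble();
        while (value <= 0) {
            System.out.println("The value must be positive, please enter another number:");
            value = input.nextDouble();
        }
        return value;
    }

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.println("Please enter the employee's name or STOP to end the program:");

        // Sentinel-controlled loop: keep going until the name is STOP.
        while (input.hasNextLine()) {
            String name = input.nextLine();
            if (name.equalsIgnoreCase("STOP")) {
                break;
            }
            double rate = readPositiveDouble(input, "Enter the employee's hourly pay rate:");
            double hours = readPositiveDouble(input, "Enter the employee's hours worked:");
            input.nextLine(); // consume the line break left behind by nextDouble()
            System.out.printf("%s is due $%,.2f%n", name, computePay(rate, hours));
            System.out.println("Please enter the employee's name or STOP to end the program:");
        }
        System.out.println("Thank you for using the program...goodbye");
    }
}
```

The two fixes relative to the original are that the validation uses `<= 0` (the original `if ( rate > 0 )` prints the warning for valid input), and that everything after the name prompt sits inside a loop so the program keeps asking until STOP.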
http://www.dreamincode.net/forums/topic/20029-while-loops-and-if-constructs/
Ever since work from home began, my productivity has had its ups and lows until I found the right way to do it. Since then, it has been at a constant high. During the initial days, all of us must have had mixed feelings. I first thought it would be fun for a while to wake up five minutes before the first meeting and spend all day in pajamas. And I thought avoiding traffic and travel was the best thing that could ever happen to me. But later on, when burnout kicked me pretty hard, I decided to not let it get to me. I thought it was just me, but later on, when I’d talked to my colleagues who live with their parents or the ones that have kids at home or a noisy neighbor who loves to yell all the time, I realized what the real deal was. Also, being prone to get distracted or to procrastinate has also become extremely common as we are all at home in our own space. We’re the master of our time now as there’s no one around us that we’re scared will judge us. …

Being a developer, it is essential to worry about the performance of the application that we build. Memory management is one of the major factors that affects the performance of an application. Many a time, JavaScript developers tend not to consciously think about memory management, because JavaScript automatically allocates memory space when an object is created and collects it later as garbage when it is no longer used. This behind-the-scenes behaviour of garbage collection in JavaScript can cause a lot of confusion. Read further to understand this better. There are three steps involved in the memory life cycle of almost all programming…

Most web developers use Chrome for their development. According to Wikipedia, about 65% of browser users seem to use Chrome, which is one of the reasons why developers choose Chrome to test an application built by them. So if you’re a developer who uses Chrome a lot, here’s a bunch of Chrome extensions that would make your life a lot easier.
Colorzilla basically provides a color picker widget along with a lot of other features. … Since 2015, JavaScript has been receiving constant yearly updates with new features being added. Though ES2021/ES12 will be released next year, we can already have a look at what’s to come since many features already reached Stage 4(Finished) specification and will be included in the specification. In this article, I will discuss the features that have already reached stage four and are added to the Google Chrome V8 engine. String.prototype.replace is an existing method that allows us to replace a pattern in a string with something else. One thing to remember in this method is that, when we replace a pattern, only the first occurrence of that pattern in the string would be replaced. … In React, we use setState() to update the state of any component. Now setState() does not immediately mutate this state, rather it creates a pending state transition. Accessing the state immediately after calling setState() returns the existing value and not the updated one. As beginners in React, I’m sure most of us would have often faced this problem. There is no guarantee of synchronous operation on setState() and calls may be batched for performance gains. It is easy to forget that the setState() is asynchronous, making it tricky for us to debug issues in your code. The setState() also does not return a Promise. … I hope you could find at least some of the memes relatable. But hey, aren’t they chucklesome? Laugh out loud and code intensely :) JavaScript Modules are separated by files and loaded asynchronously. Exports are defined using the export keyword and imports are defined using import keyword. While the basics of imports and exports are simple to understand, there are many other ways to work with ES Modules. In this article, we’ll go over all of the ways in which you can export and import within your modules. 
We’ll be looking into three types of exports: Every module has a single default export, which represents the main value that is exported from that module. … As described in the MDN docs: “The optional chaining operator ( ?.) permits reading the value of a property located deep within a chain of connected objects without having to expressly validate that each reference in the chain is valid.” In other words, optional chaining with ?. is a safe way to access nested object properties — even if an intermediate property doesn’t exist. An object can have a very different nested structure of objects. Working with data in JavaScript frequently involves situations where you aren’t sure that something exists. As an example, consider objects for app data. Most of our apps have an author in the app.author property with the first name app.author.firstName, … React Router is a collection of navigational components that are widely used in almost every React app. It is one of the most popular dependencies in React. The next major version(v6) is yet to be released. It’s still in the beta stage at the time of this writing. Since it’s always nice to explore new features, I’d like to share my thoughts and give you all a sneak peek of the upcoming features and changes to come. React Router v6 is a lot smaller than its predecessor. The size is actually reduced by 60%, which is a good thing. … It has been around three years since React v16 was first released, and finally, the wait is over for the next major release. The React developer team promises that update v17 is incredibly important for the future of React, but it was also mentioned that no new features have been added. You might be wondering why it was released then. In this article, I’ll be listing out the changes made in the latest version. React 17 is primarily focused on making it easier to upgrade React itself. 
Though it is unusual that there are no developer-facing features in this update, the main objective of this release is to make sure it is safe to embed a tree managed by one version of React inside a tree managed by another version of React. …
https://harshaktg.medium.com/?source=post_internal_links---------5----------------------------
QOTW: "Giving full access rights to a secretary or new programmer ought to insure an occasional random file deletion." -- Raymond Hettinger

"I always use join, but that's probably because that method is more likely to run code that I once wrote. Never trust code written by a man who uses defines to create his own C syntax." -- Fredrik Lundh

Peter Hansen is right in complimenting Sorin Marti on his excellent problem description, and helps with reading binary data from a file. <>

Fredrik Lundh shows how easy it is to write macro-like extensions for your Python application. <>

Tim Delaney provides a mutable string class that can be used to concatenate strings efficiently and easily, instead of using myPage+="more html..." or "\n".join(strings). <>

Bengt Richter gives a possible explanation of the (historic?) distinction between carriage return (CR) and line feed (LF), and how this bears on problems with "shebang" scripts. <>

Alan Kennedy explains why compiling to C code is not always guaranteed to speed things up, and why PyPy (Python written in Python!) could help in the performance department. <>

Raymond Hettinger explains the purpose of some of the recent iteration constructs (itertools), and why things are this way. <>

Edvard Majakari shares interesting opinions on Unit Testing and test-driven development. <>

PyRex 0.8, a language for writing Python extension modules. <>

M2Crypto 0.11, a crypto and SSL toolkit for Python. <>

PythonCAD 8th release, a CAD package written in Python. <>

DocUtils 0.3, a system for processing plaintext documentation (reStructuredText markup). <>

Scratchy 0.4, an Apache log parser and HTML report generator. <>

ClientForm 0.0.10 and 0.1.3a, a Python module for handling HTML forms on the client. <>

SCons 0.90, a software construction tool (build tool, or make tool). <>

Twisted 1.0.6, an event-driven networking framework for server and client applications. <>

David Mertz, our own Lulu, presents Twisted to a wider audience.
<> TTFQuery 0.2.4, builds on the FontTools module to allow you to query TTF font-files for metadata and glyph outlines. <>.
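The concatenation point behind Fredrik's join quote above can be sketched in a few lines of plain Python (nothing here is from the thread itself): repeated += on a string may copy the whole accumulated string on each step, while ''.join builds the result in one pass over the parts.

```python
# Build the same page text two ways and check they agree.
parts = []
page = ""
for i in range(1000):
    chunk = "<p>%d</p>" % i
    page += chunk        # may copy the growing string on every step
    parts.append(chunk)  # O(1) append; join does the copying once

joined = "".join(parts)
assert joined == page
print(len(joined))  # -> 9890
```

Modern CPython often optimises the += case in place, but the join idiom is the portable way to get linear-time accumulation, which is what mutable-string classes like the one mentioned above also aim for.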
http://www.linuxtoday.com/developer/2003070101526NWCYDV
by Zoran Horvat Feb 15, 2014

Given a string, write a function which replaces multiple white space characters with single space. Function should return the modified string.

Example: If string " yet another sunny day " is passed to the function, it should return string " yet another sunny day ". Note that heading and trailing space characters have remained.

In this exercise we are dealing with character strings. Despite the possible view that they are representing words and sentences, strings are not much more than common arrays of characters. We are asked to remove multiple spaces and replace them with exactly one space character each.

Starting from the stand that strings are arrays of characters, we could reformulate the problem statement into a more convenient form. Instead of insisting on replacing the spaces, we could say that in each group of consecutive spaces all spaces except one should be removed from the string. This immediately gives the idea that we could walk through the string and count consecutive spaces. Each time a non-space character is encountered, the counter is reset to zero. Repeated spaces will be discovered by observing their associated count, which indicates their position within the current group and will always be greater than one. All such spaces will be skipped, which means that only the first space in each group will be pushed to the output.

Now that we have formulated the solution in this way, it is obvious that we do not really need the counter. It is sufficient to just keep a Boolean flag which would indicate whether the space character has been pushed to the output or not.
Here is the pseudocode of the function which performs this transformation:

```
function RemoveMultipleSpaces(s)
    s - string
begin
    t - empty string
    wasLastSpace = false
    for each character c in s
    begin
        if c <> " " OR NOT wasLastSpace then
            append c to t
        wasLastSpace = (c = " ")
    end
    return t
end
```

Code below is a C# console application which implements the function for removing multiple spaces from the string and lets the user enter texts to process.

```csharp
using System;
using System.Text; // StringBuilder

namespace RemovingMultipleSpaces
{

    public class Program
    {

        static string RemoveMultipleSpaces(string s)
        {

            StringBuilder result = new StringBuilder();
            bool wasLastSpace = false;

            for (int i = 0; i < s.Length; i++)
            {
                char c = s[i];
                if (c != ' ' || !wasLastSpace)
                    result.Append(c);
                wasLastSpace = (c == ' ');
            }

            return result.ToString();

        }

        static void Main(string[] args)
        {

            string line = ReadLine();

            while (line.Length > 0)
            {
                string modified = RemoveMultipleSpaces(line);
                Console.WriteLine("Modified text: \"" + modified + "\"");
                Console.WriteLine();
                line = ReadLine();
            }

        }

        static string ReadLine()
        {
            Console.Write("Text to process (ENTER to quit): ");
            return Console.ReadLine();
        }

    }
}
```

When console application is run, it produces output like this:

```
Text to process (ENTER to quit): removing multiple spaces is so simple
Modified text: "removing multiple spaces is so simple"

Text to process (ENTER to quit): yet another sunny day
Modified text: " yet another sunny day "

Text to process .
```
http://codinghelmet.com/exercises/removing-multiple-spaces
SMS framework with pluggable providers.

Key features: SMSframework supports the following. Also see the full list of providers.

SMSframework handles the whole messaging thing with a single Gateway object. Let's start with initializing a gateway:

```javascript
var smsframework = require('smsframework');
var gateway = new smsframework.Gateway();
```

The Gateway() constructor currently has no arguments.

A Provider is a package which implements the logic for a specific SMS provider. Each provider resides in an individual package smsframework-*. You'll probably want to install some of these first.

Arguments:

* provider: String is the name of the provider package.
* alias: String is the provider alias: an arbitrary string that uniquely identifies the provider instance. You'll use this string in order to send messages via a specific provider.
* config: Object is the Provider-dependent configuration object. Refer to the provider documentation for the details.

When a package is require()d, it registers itself under the smsframework.providers namespace. You don't need to require providers manually, as SMSframework does this for you.

```javascript
gateway; // package 'smsframework-provider1'
gateway; // package 'smsframework-provider2'
```

The first provider becomes the default one, unless you use Message Routing.

An alternative syntax to add providers in bulk.

Arguments:

* providers: Array.<{ provider: String, alias: String, config: Object? }> is an array that specifies multiple providers:

```javascript
{ provider: 'provider1', alias: 'primary',   config: {} },
{ provider: 'provider2', alias: 'secondary', config: {} }
```

This works perfectly when your providers are defined in the configuration. Consider using js-yaml:

```yaml
providers:
- { provider: 'provider1', alias: 'primary', config: { apitoken: '123456' } }
- { provider: 'provider2', alias: 'secondary', config: { apitoken: '987654' } }
```

Then, in the application:

```javascript
var config = yaml;
gateway;
```
To send a message, you first create it with Gateway.message(to, body), which returns a fluid interface object.

Arguments:

- to: String: recipient number
- body: String: message body

Properties:

- message: OutgoingMessage: the wrapped Outgoing Message object

Methods:

- send(): Q: send the message. The method returns a promise for OutgoingMessage which is resolved when the message is sent, or rejected on a sending error. See Handling Send Errors.
- from(from: String): set the message originating number. Used to pick a specific source number if the gateway supports that.
- provider(provider: String): choose the provider by alias. If no provider is specified, the first one is used to deliver the message.
- route(..values): specify routing values. See Message Routing.
- options(options: OutgoingMessageOptions): specify sending options:
  - allow_reply: Boolean: allow replies for this message. Default: false.
  - status_report: Boolean: request a delivery report. Default: false. See: status.
  - expires: Number?: message validity period in minutes. Default: none.
  - senderId: String?: SenderID to replace the source number. Default: none. NOTE: this advanced feature is not supported by all providers! Moreover, some of them can have special restrictions.
  - escalate: Boolean: is a high-priority message: these are delivered faster, at a higher price. Default: false.
- params(params: Object): specify provider-dependent sending parameters: refer to the provider documentation for the details.

All the above methods are optional; you can just send the message as is:

gateway.message('+123456789', 'hi there!').send(); // using the default provider

Here's the full example:

var smsframework = require('smsframework');
var gateway = new smsframework.Gateway();

gateway.message('+123456789', 'hi there!')
    .provider('main')  // use the named provider
    .options({
        allow_reply: true,
        status_report: false,
        expires: 60,
        senderId: 'smsframework'
    })
    .send()
    .then(function (message) {
        // Handle success
    })
    .catch(function (err) {
        // Handle sending errors
    });

If you dislike promises, you can always get back to the good old NodeJS-style callbacks, e.g. with Q's nodeify():

gateway.message('+123456789', 'hi there!').send()
    .nodeify(function (err, message) { /* ... */ });

When you send() a message, the promise may be rejected with an error.
The error object is provided as an argument to the callback function, and can be one of the following:

- Error: an unknown provider was specified. You may have a typo, or the provider package is missing.
- Error: a runtime error occurred somewhere in the code. Rare.
- smsframework.errors.SendMessageError: an advanced error object. Has the code field which defines the error conditions. See smsframework.errors.SendMessageError for the list of supported error codes.

Example:

gateway.message('+123456789', 'hi there!').send()
    .catch(function (err) {
        if (err instanceof smsframework.errors.SendMessageError)
            console.error('Send failed with code:', err.code);
        else
            throw err;
    });

The Gateway object is an EventEmitter which fires the following events:

- Outgoing Message: a message is being sent. Arguments: message: OutgoingMessage: the message being sent. See OutgoingMessage. NOTE: it is not yet known whether the message was accepted by the Provider or not. Also, the msgid and info fields are probably not populated.
- Outgoing Message (sent): a message that was successfully sent. Arguments: message: OutgoingMessage. See OutgoingMessage. The message object is populated with the additional information from the provider, namely, the msgid and info fields.
- Incoming Message: a message that was received from the provider. Arguments: message: IncomingMessage: the received message. See IncomingMessage.
- Message Status: a message status reported by the provider. A status report is only delivered when explicitly requested with options({ status_report: true }). Arguments: status: MessageStatus: the status info. See MessageStatus.
- Error: an error object reported by the provider. Arguments: error: Error|SendMessageError: the error object. See Error Objects. Useful for attaching some centralized logging utility. Consider winston for this purpose.

Events are handy, unless you need more reliability: if an event handler fails to process the message, the Provider still sends 'OK' to the SMS service... and the message is lost forever.
Functional handlers solve this problem: you register a callback function that returns a promise, and in case the promise is rejected, the Provider reports an error to the SMS provider so it retries the delivery later. Example:

// Handler for incoming messages
gateway.receiveMessage(function (message) {
    return saveMessageToDb(message); // return a promise; a rejection reports an error
});

// Handler for incoming status reports
gateway.receiveStatus(function (status) {
    return updateMessageStatus(status);
});

Whenever any of the handlers fail, the Provider reports an error to the SMS service, so the data is re-sent later.

receiveMessage(callback) subscribes a callback to Incoming Messages. Can be called multiple times.

Arguments:

- callback: function(IncomingMessage): Q: a callback that processes an Incoming Message. If it returns a rejection, the Provider reports an error to the SMS service.

receiveStatus(callback) subscribes a callback to Message Statuses. Can be called multiple times.

Arguments:

- callback: function(MessageStatus): Q: a callback that processes a Message Status. If it returns a rejection, the Provider reports an error to the SMS service.

SMSframework uses the following objects to represent message flows:

- IncomingMessage: a message received from the provider. Source: lib/data/IncomingMessage.js.
- OutgoingMessage: a message being sent. Source: lib/data/OutgoingMessage.js.
- MessageStatus: a status report received from the provider. Source: lib/data/MessageStatus.js.

The Gateway has an internal express application, which is used by all providers to register their receivers: HTTP endpoints used to interact with the SMS services. Each provider is locked under the /<alias> prefix. The resources are provider-dependent: refer to the provider documentation for the details. The recommended approach is to use /im for incoming messages, and /status for status reports.
To use the receivers in your application, use the Gateway.express() method, which returns an express middleware:

var smsframework = require('smsframework'),
    express = require('express');

// Gateway
var gateway = new smsframework.Gateway();
gateway.addProvider('clickatell', 'primary', {}); // provider, alias 'primary'

// Init express
var app = express();
app.use('/sms', gateway.express()); // mount SMSframework middleware under /sms
app.listen(80); // start

// Ready to receive messages
gateway.receiveMessage(function (message) { /* ... */ });

Assuming that the provider declares a receiver as '/receiver', we now have a '/sms/primary/receiver' path available. In your Clickatell admin area, add this URL so Clickatell passes the incoming messages to us.

listen() is sugar to start listening on the specified network address immediately. Arguments: see net.Server.listen(). Returns: a promise for the http.Server object. Example:

gateway.listen(80);

It's advised to mount the receivers under some difficult-to-guess path: otherwise, attackers can send fake messages into your system. Secure example:

app.use('/b9d1c3a7/sms', gateway.express());

NOTE: other mechanisms, such as basic authentication, are not typically useful, as some services do not support them.

SMSframework requires you to explicitly specify the provider for each message, or uses the first one. In real-world conditions with multiple providers, you may want a router function that decides which provider to use and which options to pick. In order to achieve this, use the route() function to specify routing values:

gateway.message('+123456789', 'hi there!')
    .route('customer-1', 'notify') // arbitrary routing values
    .send();

Now, set a router function: a function which gets an outgoing message plus the additional routing values, and decides on the provider to use:

gateway.setRouter(function (message, value1, value2) {
    // Pick a provider alias based on the routing values
    if (value2 === 'notify') return 'secondary';
    return 'primary';
});

The router function is also the right place to specify message options and parameters. To unset the router function, call gateway.setRouter() with no arguments.

The following providers are bundled with SMSframework and thus require no additional packages.

Null provider. Source: lib/providers/null.js. The 'null' provider just ignores all outgoing messages. Example:

gw.addProvider('null', 'null', {}); // provider, alias

Log provider. Source: lib/providers/log.js. The 'log' provider just logs all outgoing messages without sending them anywhere. Config:

- log: function(OutgoingMessage): the logger function.
Default: log the message destination and text to the console. Example:

gw.addProvider('log', 'log', {});

Loopback provider. Source: lib/providers/loopback.js. The 'loopback' provider is used as a dummy for testing purposes. It consumes all messages, supports delivery notifications, and even has a '/im' HTTP receiver. All messages flowing through it get incremental msgids starting from 1.

LoopbackProvider stores all messages that go through it. To get those messages, call .getTraffic(). This method empties the message log.

var provider = gateway.getProvider('lo'); // assuming the loopback provider was added under alias 'lo'
var traffic = provider.getTraffic();

receive(from, body) simulates an incoming message.

Arguments:

- from: String: source number
- body: String: message text

Returns: a promise for a message processed by SMSframework.

gateway.getProvider('lo').receive('+111', 'incoming message');

subscribe(sim, callback) registers a virtual subscriber which receives messages sent to the matching number.

Arguments:

- sim: String: subscriber phone number
- callback: function(from: String, body: String, reply: function(String): Q): a callback which gets the messages sent to this subscriber. The last argument is a convenience function to send a reply. It wraps LoopbackProvider.receive().

gateway.getProvider('lo').subscribe('+123', function (from, body, reply) {
    reply('automated response');
});
https://www.npmjs.com/package/smsframework
To demonstrate the high levels of interoperability and portability that can be available when you build and deploy applications in the cloud, I decided to build a fun little travel application. My app, which consists of two parts, leverages the user preferences stored in a user profile to present a map showing what lodging is available in an area. The first part manages user preferences using the MongoDB service available through Bluemix and exposes the resulting service to the external world through an API. The second part integrates the first part with external services in a web application.

“A cloud application must be scalable, portable, and integrate easily with internal services. It should not be wearisome to provision or manage throughout its lifecycle.”

Note:
- After you click on Run the app, you can log in with any ID and password you want.
- To fork the code for this exercise, after you click Get the code, click the EDIT CODE button in the upper right-hand corner (enter your DevOps Services credentials if you're not already logged in) and click the FORK button on the menu to create a new project. Alternatively, you can export the code by selecting the root folder, then selecting File > Export from the left navigation.

WATCH: See IBM Bluemix in action

What you'll need to build a similar app:
- A basic familiarity with the Spring MVC framework
- Spring Tool Suite (or Eclipse IDE), with the Cloud Foundry plugin installed
- MongoDB NoSQL database
- A Google Maps API account for testing purposes
- An Expedia Developer API account for testing purposes
- jquery-ui-map, the Google Maps V3 plugin for jQuery, which simplifies the work with the Google Maps API

Step 1. Create the cloud application

I created and deployed this sample on Bluemix. For this sample, I chose Java with the Spring Framework. To create an application, it's enough to go to Bluemix and choose the type of application (Java stand-alone, Java Web, Ruby, etc.) — Java Web in our case.
Step 2. Install and use the cf command-line tool

You can manage the application with the Bluemix web interface or the command-line interface provided by the Cloud Foundry project. For this example, I chose the command-line interface, cf. The command line allows you to deploy, associate services, control (start and stop) applications, and more. Download the CLI from GitHub and launch the installer. The installation result is an executable file: cf.exe. First, you must set the target API endpoint, then log in. Now you can list applications, services, and bound services.

Step 3. Prepare the development environment

This sample uses the MVC Spring Framework. The environment used is the Spring Tool Suite with the Cloud Foundry plugin. The tools and technologies used are:
- Spring 3.1.1
- JDK 7
- Spring Tool Suite 3.4.0 + Cloud Foundry Integration for Eclipse 1.5.1

When you create a Java Web Project you want to deploy on a Cloud Foundry platform, you must add the Cloud Foundry nature to the project. By doing so, you create the manifest.yml file that describes the app and its resource needs to the Cloud Foundry runtime. We are going to develop two parts to our application:
- The first is the UserService. It exposes the API to manage the user info. It uses an internal cloud platform MongoDB, a popular NoSQL database, to realize the persistence of data.
- The second is MyVacations, which allows the logged-in user to search available hotels using some parameters. The UserService application provides the values of some of these search parameters. Expedia Services provides the list and info of hotels. Google Maps API Web Services positions the list of hotels on a map.

The UserService shows how to store information about users in a central location using the MongoDB service. The UserService functions are:
- The logging function — The UserService receives the user name and password, and searches for the user in the database.
If it finds the user, UserService returns it; otherwise, it creates a new record.
- The profiling function — The UserService receives the user name, searches the user information (preferred location, number of adults, number of children), and returns it to the client.

Step 4. Bind a cloud service (MongoDB)

To use a MongoDB service, you must first create an instance of the service:
- Connect to Bluemix and select Add a Service in the dashboard view.
- Select MongoDB from the list of available services and create the service instance.

Alternatively, you can use the command line:

cf create-service mongodb 100 mongodb_ser1 (USAGE: cf create-service SERVICE PLAN SERVICE_INSTANCE)
cf bind-service UserService mongodb_ser1 (USAGE: cf bind-service APP SERVICE_INSTANCE)

Now a MongoDB service instance is ready and bound to UserService.

Step 5. Use the MongoDB service in the application

After creating the service and associating it with your application, its configuration is added to VCAP_SERVICES, a read-only environment variable that contains information you can use in code to connect to your services. In this case, it contains a MongoDB entry whose credentials include a "url" field. To connect to the MongoDB service instance using the VCAP_SERVICES variable, you extract the "url" JsonNode. Notice that the URL contains all the parameters to connect to the database (user credentials, host name, port, DB name).

private static String getUrlConnection() {
    // ... parse the VCAP_SERVICES JSON and return the "url" value ...
}

You can create a Spring configuration class to create a DB connection; the UserManager class uses the MongoConfiguration to interact with the DB. The init method gets the "users" collection or creates one if it doesn't exist.

private void init() {
    ApplicationContext ctx = new AnnotationConfigApplicationContext(MongoConfiguration.class);
    db = (DB) ctx.getBean("mongoDb");
    coll = db.getCollection("users");
}

Note that MongoDB is not a relational DBMS but a document-oriented one.
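The Java getUrlConnection() helper above pulls the connection URL out of the VCAP_SERVICES JSON. Purely as an illustration of that parsing step, here is the same idea sketched in Python; the service name, credentials, and URL below are made-up stand-ins for what Bluemix actually injects into the environment.

```python
import json
import os

# Hypothetical VCAP_SERVICES content, shaped like a bound MongoDB entry.
# On the real platform this variable is injected for you.
os.environ["VCAP_SERVICES"] = json.dumps({
    "mongodb-2.4": [{
        "name": "mongodb_ser1",
        "credentials": {
            "url": "mongodb://user:secret@db.example.com:10042/db"
        }
    }]
})

def mongo_url_from_env():
    """Extract the MongoDB connection URL from VCAP_SERVICES."""
    services = json.loads(os.environ["VCAP_SERVICES"])
    for name, instances in services.items():
        if name.startswith("mongodb"):
            # The URL carries credentials, host, port and DB name in one string.
            return instances[0]["credentials"]["url"]
    raise KeyError("no MongoDB service bound")

print(mongo_url_from_env())  # mongodb://user:secret@db.example.com:10042/db
```

The same lookup-by-prefix approach works for any bound service type, since VCAP_SERVICES groups instances by service label.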
This means that instead of having tables, we have collections; instead of having rows (or tuples), we have documents; and instead of columns, we have fields. These fields are not predefined, as is the case for the columns in a table: you can enter any kind of data in a collection. To find a document, you must create a BasicDBObject.

BasicDBObject user = new BasicDBObject("username", userData.getUsername());
return (DBObject) coll.findOne(user);

UserController is the UserManager client, and it uses its functions to get and save user data.

Step 6. Deploy a Spring application

MyVacations uses UserService to get user information to profile the search with the values that UserService saved during the user's last login. The user can visualize, on a map, a set of hotels resulting from the search. Also in this case, the controllers are not configured in the XML config file but are dynamically detected by the Spring Framework, thanks to this directive contained in the servlet configuration file:

<context:component-scan base-package="..." />

The first controller, HomeController, is called to display the login page.

/**
 * Simply selects the home view to render the login page.
 */
@RequestMapping(value = "/", method = RequestMethod.GET)
public String home(Locale locale, Model model) {
    return "login";
}

The HotelsController activates upon submission of the login page. This controller, called through an HTTP GET request with the "username" parameter, accesses UserService to get any user preferences saved during the last access by this user. This controller uses the RestTemplate to make a RESTful call to UserService. RestTemplate is a helper Spring class for client-side HTTP access. The objects passed to and returned from the getForObject() method are converted to HTTP requests and from HTTP responses by HttpMessageConverters. In our sample, this class is used to call RESTful services such as UserService.

Step 7. Integrate services

You will want to integrate services to bring more data to MyVacations.
I'm going to provide two examples of integrating services:
- The Expedia RESTful service, to bring in information on hotels.
- The Google Maps service, to visualize the hotels on a map.

Let's integrate the Expedia web services first to get the information about hotels. First, you must go to the Expedia API Developer site to register for a developer account and an API key. The SearchController activated by the Submit button click is an Ajax call in Hotels.jsp.

$.get('/search', $("#ricerca").serialize(), function (data, status) {
    if (status == 'success') {
        if (data.toString() == "") {
            $("#map_canvas").hide();

In SearchController, there is the call to the ExpediaClient to get the HotelSummary list.

List<HotelSummary> response = null;
response = (new ExpediaClient()).getHotels(location, dateFrom, dateTo, numAdults, numChildren);

The ExpediaClient, using the user information obtained from UserService, extracts the ExpediaObjects by decoding the Expedia JSON response.

Listing 1. Extracting a list of hotels

ExpediaObjects hotels = (ExpediaObjects) restTemplate.getForObject(
    buildQueryHotelsURL(location, dal, al, numAdulti, numBambini),
    ExpediaObjects.class);

The sample then uses the Google Maps service to visualize, on a map, the hotel list obtained from Expedia. The Google Maps API lets you embed a Google map image on your web page. Before you start, you will need a specific API key from Google. The key is free, but you must create a Google account.

<script src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&sensor=TRUE_OR_FALSE"></script>

For interacting with the Google Maps API, I chose jQuery-ui-map. This is a good jQuery plugin for embedding maps in web and mobile applications. It allows you to view maps and markers, and to take advantage of advanced services such as the management of trails, the street-view mode, and dynamic loading of geographic data represented in JSON.
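The buildQueryHotelsURL(...) call above assembles the REST query string from the user's search parameters. To make that step concrete, here is a hedged Python sketch of the same idea; the endpoint and parameter names are invented placeholders, not Expedia's real API.

```python
from urllib.parse import urlencode

def build_query_hotels_url(location, date_from, date_to, num_adults, num_children,
                           api_key="YOUR_API_KEY"):
    """Assemble a hotel-search query URL from user preferences.

    The base URL and parameter names are hypothetical stand-ins.
    """
    base = "https://api.example.com/hotels/list"
    params = {
        "apiKey": api_key,
        "destinationString": location,
        "arrivalDate": date_from,
        "departureDate": date_to,
        # One room: adults and children encoded together.
        "room1": f"{num_adults},{num_children}",
    }
    return base + "?" + urlencode(params)

url = build_query_hotels_url("Rome", "2014-06-01", "2014-06-07", 2, 1)
print(url)
```

Keeping URL construction in one pure function like this makes it trivial to unit-test without touching the network.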
After you create a div or another similar HTML container, it is time to launch gmap — the key method of the plugin that allows us to invoke the functions of the Google Maps API — to feed the display the coordinates of a marker reference on the map.

var map = $('#map_canvas').gmap({
    'center': new google.maps.LatLng(data[0].latitude, data[0].longitude),
    'minZoom': 5,
    'zoom': 8
});

We have created a map centered on the first hotel's geographic coordinates. Now, for every hotel in the result list, we create a marker and place it on the map.

$.each(data, function (i, m) {
    $('#map_canvas').gmap('addMarker', {
        'position': new google.maps.LatLng(m.latitude, m.longitude),
        'bounds': true
    })
    .click(function () { ... });
});

Register a function on the click event to load an info window with the short hotel description.

$('#map_canvas').gmap('openInfoWindow', { 'content': descr }, this);

Step 8. Push the application to the cloud

After building and creating the applications, we are ready to deploy them on Bluemix. Deployment is an automated matter of moving the apps from the local VM to a cloud-based VM. Use the Cloud Foundry CLI cf push command to initiate the deployment. (In Cloud Foundry documentation, the deployment process is often referred to as pushing an application.)

cf push MyVacations --path MYDIR\MyVacations.war

The push command performs a variety of staging tasks, such as finding a container to run the application, provisioning the container with the appropriate software and system resources, starting one or more instances of the application, and storing the expected state of the application in the Cloud Controller database.

Step 9. Test the application

For testing purposes, you'll want to do a simple run test on the application, then you'll want to test whether the application is portable. For simplicity, the login page allows access with any user name and password, and if the user is not yet present in the system, it is created.
After you log in, go to the search page, insert the search parameters, and click Search. If you click one of the markers, brief information about that hotel is displayed.

To test the portability of my application on different cloud platforms, I chose to deploy MyVacations on the Pivotal platform and on Google App Engine. To deploy on Google App Engine, the tools and technologies used are:
- Google App Engine Java SDK 1.8.8
- Spring 3.1.1
- Eclipse 4.2 + Google plugin for Eclipse

Because Google App Engine supports Java web applications based on the Spring Framework, my application doesn't need any changes. The Google App Engine SDK (installed in Eclipse) includes a web server for testing your application in a simulated local environment, so you can test the application without a Google user account. (It is also possible to run the application on a remote Google server.) The Google plugin for Eclipse adds items to the Run menu for starting this server. In this scenario, MyVacations installed on Google App Engine calls the UserService application installed on Bluemix through its RESTful API, which demonstrates the high level of portability of an application that doesn't use internal platform services.

The tools and technologies to use to deploy on Pivotal are:
- Spring 3.1.1
- JDK 7
- Spring Tool Suite 3.4.0 + Cloud Foundry Integration for Eclipse 1.5.1

The Cloud Foundry plugin for Eclipse enables you to deploy on the Pivotal platform. In this way, you can test the application directly on the target environment, without leaving the IDE. You need a valid Pivotal user account. On this platform, you can deploy the application with no changes, and you can deploy the UserService application with minor changes.

Conclusion

This app shows just some of the possibilities of integrating internal and external services with a cloud application.
I took advantage of some of the positive attributes that Bluemix offers:
- Reduced provisioning needs (app or infrastructure)
- Scalability
- Easy integration of internal services
- Eased management
- Portability to similar cloud platforms

On the portability issue: because Bluemix is based on Cloud Foundry, you have the freedom to move to other platforms. Let me demonstrate portability with two examples:
- Deploying MyVacations to the Pivotal cloud requires no changes. Pivotal is also based on Cloud Foundry, so the compatibility is almost total. UserService uses a MongoDB service on the Bluemix platform; on Pivotal, there is a similar MongoDB service available, but you must change the URL connection from the VCAP_SERVICES variable in the MongoConfiguration class.
- For a non-Cloud Foundry-based cloud (say, in this case, Google's cloud), it's still pretty easy to deploy MyVacations. (MongoDB is not among the services provided by the Google platform, so I chose another solution, the "Big Table" service, which — unlike MongoDB — is a proprietary solution for data persistence.) You simply add an appengine-web.xml file to the WEB-INF directory to enable Google App Engine, like so:

<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
    <application>_your_app_id_</application>
    <version>1</version>
    <threadsafe>true</threadsafe>
</appengine-web-app>

On a final note, you must design a cloud application to exploit the possibilities of a distributed platform and provide reliable, efficient, and fast service. For the most part, I think I accomplished this with the MyVacations app. Of course, when UserService gets hit with a large number of requests, it could result in slow response times for users. However, you can experiment with performance tweaks for that (like using asynchronous messaging to decouple the components so a task isn't blocked until a response is received).
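The closing remark about asynchronous messaging deserves a tiny illustration: the caller pushes work onto a queue and returns immediately, while a worker drains the queue independently. A minimal Python sketch (an in-process stand-in for a real message broker):

```python
import queue
import threading

tasks = queue.Queue()
results = []

def worker():
    """Drain the queue; processing happens off the caller's path."""
    while True:
        item = tasks.get()
        if item is None:      # sentinel: shut down
            break
        results.append(f"processed {item}")  # stand-in for slow work
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()

for i in range(3):
    tasks.put(i)   # enqueue and return immediately; caller is never blocked
tasks.put(None)    # tell the worker to stop
t.join()

print(results)     # items were processed in FIFO order
```

In a distributed deployment the in-memory queue would be replaced by an external broker, but the decoupling principle is the same: the producer's response time no longer depends on the consumer's processing time.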
There are several performance-tuning tricks to apply to this type of application; have a good time experimenting with them.

Acknowledgment

I want to thank Fabio Castiglioni for his encouragement and review of this article.
http://www.ibm.com/developerworks/java/library/j-myvacation-app/index.html
I am creating a WCF duplex callback structure, with a WPF app registering with a WCF service, and the WCF service using a callback in the WPF app to pass data from time to time. The interface for the callback in WCF is simple:

void DataUpdate(string updatedData)

yet when I launch the service and run SVCUtil.exe, the generated C# code changes the callback interface to

void DataUpdate(DataUpdate request)

What is the cause of this? How can I get the proper structure to show in the SVCUtil output? Thanks, John

OK, so the Reflection namespace has tools so you can tell if a method has a base definition in a class (MethodInfo.GetBaseDefinition().DeclaringType). This way, for instance, I can query the ToString method of my favorite class and see that yes, indeed, it inherits from System.Object. I can also detect if a method is an explicit interface implementation by checking the name of the method (methods representing explicit interface implementations will have the fully qualified name of the interface prepended to the value of the Name property), so I'm okay there. But what about plain old interface implementations? How do I get this info? Thanks if you can help!

Hi all, I am trying to implement sorting in a list using the IComparer interface. If any two values are equal, their order is not maintained in the list after using the sort function. Say I am sorting Employee objects based on Department ID, like:

Employee 1 -> Dept ID = 100, Name = "John"
Employee 2 -> Dept ID = 100, Name = "Peter"
Employee 3 -> Dept ID = 100, Name = "Wills"

The output after sorting based on Department ID should be

100, John
100, Peter
100, Wills

but in the output the order gets changed if the values in the sort column are equal. Is it possible to maintain the order if the elements are equal?

Hi, I really don't understand the practical meaning of "callback" in the real world, although there is a lot of code out there.
Can somebody give me a real-world example to explain it (not code itself)? Thank you very much.

For a handy database interface, is it good to use SqlDataSources exclusively instead of EntLib functions? Thanks for any feedback.
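Since the last question asks for a real-world illustration of a callback: a callback is simply a piece of your code that you hand to someone else's code, so they can "call you back" when something happens (a download finishes, a button is clicked, a timer fires). A minimal sketch in Python; the download function here is a stand-in that does no real I/O.

```python
def download(url, on_done):
    """Pretend to fetch a resource, then invoke the caller-supplied callback.

    `download` decides WHEN to call back; the caller decides WHAT happens.
    """
    data = f"contents of {url}"   # stand-in for real network I/O
    on_done(data)                 # the callback: control returns to the caller's code

results = []

# Two callers, same download routine, completely different reactions:
download("http://example.com/a", results.append)                  # just store it
download("http://example.com/b", lambda d: results.append(d.upper()))  # transform it

print(results)
```

That inversion, "don't call us, we'll call you", is exactly what the WCF duplex callback contract in the first post does across a network boundary: the service holds a reference to client code and invokes it when new data is available.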
http://www.dotnetspark.com/links/66677-svcutil-changing-callback-interface-method.aspx
Support for wsgi.file_wrapper

Waitress supports the Python Web Server Gateway Interface v1.0 as specified in PEP 3333. Here's a usage example:

import os

here = os.path.dirname(os.path.abspath(__file__))

def myapp(environ, start_response):
    f = open(os.path.join(here, 'myphoto.jpg'), 'rb')
    headers = [('Content-Type', 'image/jpeg')]
    start_response(
        '200 OK',
        headers
    )
    return environ['wsgi.file_wrapper'](f, 32768)

The file wrapper constructor is accessed via environ['wsgi.file_wrapper']. The signature of the file wrapper constructor is (filelike_object, block_size). Both arguments must be passed as positional (not keyword) arguments. The result of creating a file wrapper should be returned as the app_iter from a WSGI application.

The object passed as filelike_object to the wrapper must be a file-like object which supports at least the read() method; the read() method must support an optional size hint argument, and the read() method must return bytes objects (never unicode). The object should support the seek() and tell() methods. If it does not, normal iteration over the filelike_object using the provided block_size is used (and copying is done, negating any benefit of the file wrapper). It should support a close() method.

The specified block_size argument to the file wrapper constructor will be used only when the filelike_object doesn't support the seek and/or tell methods. Waitress needs to use normal iteration to serve the file in this degenerate case (as per the WSGI spec), and this block size will be used as the iteration chunk size. The block_size argument is optional; if it is not passed, a default value of 32768 is used.

Waitress will set a Content-Length header on behalf of an application when a file wrapper with a sufficiently file-like object is used, if the application hasn't already set one.
The machinery which handles a file wrapper currently doesn't do anything particularly special using fancy system calls (it doesn't use sendfile for example); using it currently just prevents the system from needing to copy data to a temporary buffer in order to send it to the client. No copying of data is done when a WSGI app returns a file wrapper that wraps a sufficiently file-like object. It may do something fancier in the future.
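To see why seek() and tell() matter, here is a small self-contained Python sketch: a file-like object that supports only read(), plus the block-by-block iteration fallback described above. This is an illustration of the behavior, not Waitress's actual implementation.

```python
import io

class StreamOnly:
    """File-like object that supports read() but not seek()/tell().

    Without seek()/tell() a server cannot learn the total length up
    front, so it must fall back to iterating in block_size chunks.
    """
    def __init__(self, data):
        self._buf = io.BytesIO(data)

    def read(self, size=-1):
        return self._buf.read(size)

    def close(self):
        self._buf.close()

def iterate_in_blocks(filelike, block_size=32768):
    """The normal-iteration fallback, sketched by hand."""
    while True:
        chunk = filelike.read(block_size)
        if not chunk:
            filelike.close()
            return
        yield chunk

# 70000 bytes served in 32768-byte chunks: two full blocks plus a remainder.
chunks = list(iterate_in_blocks(StreamOnly(b"x" * 70000), block_size=32768))
print([len(c) for c in chunks])  # [32768, 32768, 4464]
```

With a real seekable file the server can instead seek to the end, tell() the size, set Content-Length, and hand the descriptor to more efficient transmission machinery.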
https://docs.pylonsproject.org/projects/waitress/en/latest/filewrapper.html
XML Tutorial: Create, Validate And Transform XML Documents

XML programming course for beginners, with examples. This XML tutorial aims to get you started with the foundations of XML development. Did you know that if you google "XML" it returns more than 500,000,000 results? That's more than what you get by googling "NBA"... You can improve your programming skills and curriculum vitae with a small effort by completing this course in just a few hours, using a practical approach in the form of examples. Also, the fact that the course is shorter than five hours means that you will achieve an intermediate knowledge of XML in less than five hours and be creating XML documents ASAP! What are you waiting for? Join the course now and improve your programming skills in less than five hours!!

Topics covered by Feb 2nd 2016: [Updated: Feb 18th 2016] [Updated: March 19th 2016] [Updated: April 11th 2016] [Updated: April 15th 2016] [Updated: August 20th 2016] [Updated: November 26th 2016]

By the end of this lecture you will know why it is important to know XML, who Eduardo Marchuet is, and what his experience with XML is. We'll quickly go through the install process of the free editor we'll be using during the course. In this lecture you will learn the basic tags you need to compose your first XML document. By the end of this lecture you will know the rules to write a well-formed XML document and how to check it yourself. Let's check the basics! In this first lecture covering Document Type Definitions you will learn to apply structure and constraints to your XML document. In this second lecture covering Document Type Definitions we will add extra concepts and applications of DTDs to our example. A quick review of what we learnt about DTDs. General Entities are bricks you can use to build an XML wall; in this lesson you will get to know them using, as always, an example. Parameter Entities are often overlooked.
Here I will introduce them to you and show you when they can be useful. Just a couple of questions about entities... By the end of this lesson you will know what namespaces are, and how and when to use them. Schemas are another way to add constraints and structure to your XML documents; in this lesson you will learn how to create them and link them to your XML document. In this lecture we go over the XML data types tree so that at the end of the video you are able to research by yourself which data types are best suited to be used in your XML Schemas. Did you learn anything at all about schemas? Let's find out... Before we start the demo we need to have a basic but good XML document to transform. That's what we'll do here. We will take a look at the HTML that we want to generate in order to make the next steps easier. In this lecture we will create a very basic set of transformation rules that will generate the expected HTML as a result. This is the final step of the demo. Here we finally see the result of the transformation and analyze the HTML output. In this lesson we will learn the concept of "layout" in the specific context of Android application development, we will compare the two different ways we have to generate layouts, and we will go through some very basic examples of XML code with the corresponding screen output. Here we'll take a sneak peek at how XML attributes work in Android apps, common layouts in Android apps, and a few samples with their corresponding screen output. In this lesson we'll see what XQuery is, what XQuery is used for, and mention a few examples of real-life applications. Here we go step by step through the process of installing the XQuery editor that we will be using during this section. We need a database to run our queries against. Let's find and explore one! In this lecture we build a few very basic queries and run them against the database. Just a little bit more complexity.
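Several of the early lectures above revolve around checking whether a document is well-formed. As an aside, that check is easy to script yourself; a minimal sketch using only Python's standard library (any XML-aware editor performs an equivalent check):

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return True if the text parses, i.e. it is well-formed XML."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed("<note><to>Ana</to></note>"))   # True
print(is_well_formed("<note><to>Ana</note></to>"))   # False: tags cross
```

Note that well-formedness (correct syntax) is a weaker property than validity (conforming to a DTD or Schema); the standard-library parser above checks only the former.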
FLWR queries are the most used kind of query among XQuery developers; here we discover what all the fuss is about! Often we need to transform the data we retrieve so that it is structured in a very different way, or even to build completely different files like XHTML, HTML, XSLT, XML or any other kind of document. Here we see the principle behind how all this can be achieved. In this lesson we will experiment with a few functions that help us to handle and work with the value of the nodes, and also we will see how a conditional statement is structured. We'll see how to save the output generated by our query into a file at a specific path. I have built a list of around 70 interview questions that you could potentially face at a job interview. I just want you to be aware of the sort of questions you may have to answer. I'll go through the answers without getting into depth; the goal is for you to check whether you are ready or not to face them. XML Spy is probably the most popular paid tool for XML editing. In this lesson I will show you how to download and install it. Yes... the final lecture. Here I will quickly review what we have learnt in the course and say goodbye... for now :)
https://www.udemy.com/xml-tutorial-a-complete-and-practical-guide/
CC-MAIN-2017-34
refinedweb
948
70.53
You begin by creating an initial application using the Angular CLI. Throughout this tutorial, you'll modify and extend that starter application to create the Tour of Heroes app. In this part of the tutorial, you'll do the following: For the sample app that this page describes, see the live example. To set up your development environment, follow the instructions in Local Environment Setup. You develop apps in the context of an Angular workspace. A workspace contains the files for one or more projects. A project is the set of files that comprise an app, a library, or end-to-end (e2e) tests. For this tutorial, you will create a new workspace. To create a new workspace and an initial app project: Ensure that you are not already in an Angular workspace folder. For example, if you have previously created the Getting Started workspace, change to the parent of that folder. Run the CLI command ng new and provide the name angular-tour-of-heroes, as shown here:

ng new angular-tour-of-heroes

The ng new command creates a new workspace named angular-tour-of-heroes and an initial app project, also called angular-tour-of-heroes (in the src subfolder). The initial app project contains a simple Welcome app, ready to run. Go to the workspace directory and launch the application.

cd angular-tour-of-heroes
ng serve --open

The ng serve command builds the app, starts the development server, watches the source files, and rebuilds the app as you make changes to those files. The --open flag opens a browser to the app's address. You should see the app running in your browser. The page you see is the application shell. The shell is controlled by an Angular component named AppComponent. Components are the fundamental building blocks of Angular applications. They display data on the screen, listen for user input, and take action based on that input. Open the project in your favorite editor or IDE and navigate to the src/app folder to make some changes to the starter app.
You'll find the implementation of the shell AppComponent distributed over three files:

app.component.ts — the component class code, written in TypeScript.
app.component.html — the component template, written in HTML.
app.component.css — the component's private CSS styles.

Open the component class file (app.component.ts) and change the value of the title property to 'Tour of Heroes'.

title = 'Tour of Heroes';

Open the component template file (app.component.html) and delete the default template generated by the Angular CLI. Replace it with the following line of HTML.

<h1>{{title}}</h1>

The double curly braces are Angular's interpolation binding syntax. This interpolation binding presents the component's title property value inside the HTML header tag. The browser refreshes and displays the new application title. Most apps strive for a consistent look across the application. The CLI generated an empty styles.css for this purpose. Put your application-wide styles there. Open src/styles.css and add the code below to the file. /*; }

Here are the code files discussed on this page.

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'Tour of Heroes';
}

<h1>{{title}}</h1>

/*; }

© 2010–2020 Google, Inc. Licensed under the Creative Commons Attribution License 4.0.
https://docs.w3cub.com/angular~10/tutorial/toh-pt0
CC-MAIN-2021-10
refinedweb
542
59.9
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Different build variants of chrome for android have different version codes.

Reason: for targets that have the same package name (e.g. chrome, chrome
modern, monochrome, trichrome), Play Store considers them the same app and
will push the supported app with the highest version code to devices. (note
Play Store does not support hosting two different apps with same version code
and package name)

Each key in this dict represents a unique version code that will be used for
one or more android chrome apks.

Webview channels must have unique version codes for a couple reasons:
a) Play Store does not support having the same version code for different
   versions of a package. Without unique codes, promoting a beta apk to
   stable would require first removing the beta version.
b) Firebase project support (used by official builders) requires unique
   [version code + package name].
   We cannot add new webview package names for new channels because webview
   packages are whitelisted by Android as webview providers.

WEBVIEW_STABLE, WEBVIEW_BETA, WEBVIEW_DEV are all used for standalone webview,
whereas the others are used for various chrome apks.

Note that a final digit of '3' for webview is reserved for Trichrome Webview.
The same versionCode is used for both Trichrome Chrome and Trichrome Webview.
"""
ANDROID_CHROME_APK_VERSION_CODE_DIFFS = {
    'CHROME': 0,
    'CHROME_MODERN': 1,
    'MONOCHROME': 2,
    'TRICHROME': 3,
    'NOTOUCH_CHROME': 4,
    'WEBVIEW_STABLE': 0,
    'WEBVIEW_BETA': 1,
    'WEBVIEW_DEV': 2,
}

"""The architecture preference is encoded into the version_code for devices
that support multiple architectures. (exploiting play store logic that pushes
apk with highest version code)

Detail:
Many Android devices support multiple architectures, and can run applications
built for any of them; the Play Store considers all of the supported
architectures compatible and does not, itself, have any preference for which
is "better".

The common cases here:
- All production arm64 devices can also run arm
- All production x64 devices can also run x86
- Pretty much all production x86/x64 devices can also run arm (via a binary
  translator)

Since the Play Store has no particular preferences, you have to encode your
own preferences into the ordering of the version codes. There's a few
relevant things here:
- For any android app, it's theoretically preferable to ship a 64-bit version
  to 64-bit devices if it exists, because the 64-bit architectures are
  supposed to be "better" than their 32-bit predecessors (unfortunately this
  is not always true due to the effect on memory usage, but we currently deal
  with this by simply not shipping a 64-bit version *at all* on the
  configurations where we want the 32-bit version to be used).
- For any android app, it's definitely preferable to ship an x86 version to
  x86 devices if it exists instead of an arm version, because running things
  through the binary translator is a performance hit.
- For WebView, Monochrome, and Trichrome specifically, they are a special
  class of APK called "multiarch" which means that they actually need to
  *use* more than one architecture at runtime (rather than simply being
  compatible with more than one). The 64-bit builds of these multiarch APKs
  contain both 32-bit and 64-bit code, so that Webview is available for both
  ABIs. If you're multiarch you *must* have a version that supports both
  32-bit and 64-bit version on a 64-bit device, otherwise it won't work
  properly. So, the 64-bit version needs to be a higher versionCode, as
  otherwise a 64-bit device would prefer the 32-bit version that does not
  include any 64-bit code, and fail.
- The relative order of mips isn't important, but it needs to be a *distinct*
  value to the other architectures because all builds need unique version
  codes.
"""
ARCH_VERSION_CODE_DIFF = {
    'arm': 0,
    'x86': 10,
    'mipsel': 20,
    'arm64': 30,
    'x64': 60,
}
ARCH_CHOICES = ARCH_VERSION_CODE_DIFF.keys()

""" "Next" builds get +5 last version code digit. We choose 5 because it
won't conflict with values in ANDROID_CHROME_APK_VERSION_CODE_DIFFS
"""
NEXT_BUILD_VERSION_CODE_DIFF = 5

"""For 64-bit architectures, some packages have multiple targets with version
codes that differ by the second-to-last digit (the architecture digit). This
is for various combinations of 32-bit vs 64-bit chrome and webview. The
default/traditional configuration is 32-bit chrome with 64-bit webview, but
we are adding:
+ 64-bit chrome with 32-bit webview
+ 64-bit combined Chrome and Webview (only one library)
+ (maybe someday 32-bit chrome with 32-bit webview)

The naming scheme followed here is <chrome>_<webview>, e.g. 64_32 is 64-bit
chrome with 32-bit webview.
"""
ARCH64_APK_VARIANTS = {
    '64_32': {
        'PACKAGES': frozenset(['MONOCHROME', 'TRICHROME']),
        'MODIFIER': 10
    },
    '64': {
        'PACKAGES': frozenset(['MONOCHROME', 'TRICHROME']),
        'MODIFIER': 20
    }
}


def GenerateVersionCodes(version_values, arch, is_next_build):
  """Get dict of version codes for chrome-for-android-related targets

  e.g.
  {
    'CHROME_VERSION_CODE': '378100010',
    'MONOCHROME_VERSION_CODE': '378100013',
    ...
  }

  versionCode values are built like this:
  {full BUILD int}{3 digits for PATCH}{1 digit for architecture}{final digit}.

  MAJOR and MINOR values are not used for generating versionCode.
  - MINOR is always 0. It was used for something long ago in Chrome's history
    but has not been used since, and has never been nonzero on Android.
  - MAJOR is cosmetic and controlled by the release managers. MAJOR and BUILD
    always have reasonable sort ordering: for two version codes A and B, it's
    always the case that (A.MAJOR < B.MAJOR) implies (A.BUILD < B.BUILD), and
    that (A.MAJOR > B.MAJOR) implies (A.BUILD > B.BUILD). This property is
    just maintained by the humans who set MAJOR.

  Thus, this method is responsible for the final two digits of versionCode.
  """
  base_version_code = '%s%03d00' % (version_values['BUILD'],
                                    int(version_values['PATCH']))
  new_version_code = int(base_version_code)

  new_version_code += ARCH_VERSION_CODE_DIFF[arch]
  if is_next_build:
    new_version_code += NEXT_BUILD_VERSION_CODE_DIFF

  version_codes = {}

  for apk, diff in ANDROID_CHROME_APK_VERSION_CODE_DIFFS.iteritems():
    version_code_name = apk + '_VERSION_CODE'
    version_code_val = new_version_code + diff
    version_codes[version_code_name] = str(version_code_val)

    if arch == 'arm64' or arch == 'x64':
      for variant, config in ARCH64_APK_VARIANTS.iteritems():
        if apk in config['PACKAGES']:
          variant_name = apk + '_' + variant + '_VERSION_CODE'
          variant_val = version_code_val + config['MODIFIER']
          version_codes[variant_name] = str(variant_val)

  return version_codes
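As an illustration of the digit scheme described in the docstring above, here is a small Python 3 sketch. The offset tables are copied from the file; the BUILD/PATCH values are made up for the example, and this simplified helper omits the "next build" and 64-bit-variant adjustments:

```python
# Sketch of the versionCode layout described above:
# {BUILD}{3 digits of PATCH}{arch digit}{package digit}.
ARCH_DIFF = {'arm': 0, 'x86': 10, 'mipsel': 20, 'arm64': 30, 'x64': 60}
APK_DIFF = {'CHROME': 0, 'CHROME_MODERN': 1, 'MONOCHROME': 2, 'TRICHROME': 3}

def version_code(build, patch, arch, apk):
    # BUILD, then zero-padded PATCH, then two digits for arch + package.
    base = int('%s%03d00' % (build, patch))
    return base + ARCH_DIFF[arch] + APK_DIFF[apk]

# Hypothetical BUILD '3781', PATCH 0:
print(version_code('3781', 0, 'arm', 'CHROME'))        # 378100000
print(version_code('3781', 0, 'arm64', 'MONOCHROME'))  # 378100032
```

The arm64 Monochrome code ends in "32": 30 for the architecture plus 2 for the package, matching the tables above.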
https://chromium.googlesource.com/chromium/src/+/ed2e5ba73da9ffdc2a3a357a6fa9dcfeb1611994/build/util/android_chrome_version.py
CC-MAIN-2019-51
refinedweb
1,015
50.87
Using Telnet in Python

To make use of Telnet in Python, we can use the telnetlib module. That module provides a Telnet class that implements the Telnet protocol. The Telnet class has several methods; in this example I will make use of these: read_until(), read_all() and write().

Telnet Script in Python

Let's make a telnet script:

import getpass
import sys
import telnetlib

HOST = "hostname"
user = raw_input("Enter your remote account: ")
password = getpass.getpass()

tn = telnetlib.Telnet(HOST)

tn.read_until("login: ")
tn.write(user + "\n")
if password:
    tn.read_until("Password: ")
    tn.write(password + "\n")

tn.write("ls\n")
tn.write("exit\n")

print tn.read_all()

At ActiveState you can find more Python scripts using telnetlib, for example this script. For more information about using the Telnet client in Python, see the telnetlib documentation.
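One caveat worth noting: the script above is Python 2. In Python 3 the telnetlib API is largely the same, but it reads and writes bytes rather than str, so prompts and commands must be encoded. A minimal sketch of just the byte handling (no network I/O here; the user name is made up):

```python
user = "guest"

# Python 3 telnetlib expects bytes, e.g. tn.read_until(b"login: ")
login_prompt = b"login: "
command = (user + "\n").encode("ascii")

print(command)  # b'guest\n'
```

With a real connection you would pass login_prompt to read_until() and command to write(), exactly as in the Python 2 script.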
https://www.pythonforbeginners.com/code-snippets-source-code/python-using-telnet
CC-MAIN-2020-16
refinedweb
126
57.77
netdb.h redefines "h_addr"

Bug Description

The attached code compiles all right (gcc test.c -o test) but when I run it, it segfaults:

% ./test example.com 80 toto phocean.net
resolv: phocean.net
zsh: segmentation fault ./test phocean.net 80

A debug session (attached) shows that the resolv function is working as expected, but something on the stack gets corrupted and it is unable to return to the main section. The weird thing is that it compiles and works very well on all the other distros I had as virtual machines: Fedora 13 32bits & 64bits, openSUSE 11.2 64bits and Debian 5 32bits. I also took the binary from one of these VMs and ran it on Ubuntu, and it worked. The binary from Ubuntu crashes anywhere else. That's why I presume it is libc6 related.

Error in /var/log/messages:
kernel: [81039.332829] test[25870]: segfault at 7fff6d799f39 ip 000000000040077f sp 00007fff7b852b40 error 6 in test[400000+1000]

ProblemType: Bug
DistroRelease: Ubuntu 10.04
Package: libc6 2.11.1-0ubuntu7.1
ProcVersionSignature:
Uname: Linux 2.6.32-22-generic x86_64
Architecture: amd64
Date: Fri Jun 4 20:20:17 2010
InstallationMedia: Ubuntu 10.04 LTS "Lucid Lynx" - Release amd64 (20100429)
ProcEnviron:
 PATH=(custom, user)
 LANG=fr_FR.utf8
 SHELL=/bin/zsh
SourcePackage: eglibc

(actually, this may be a glibc issue, as originally reported; gcc-4.3 fails under lucid too)

seen on dapper (gcc-4.0), and lenny (4.3) as well.

seen on sid as well. the original test case works with -fno-stack-protector

Does _not_ fail on 32bit...

The problem is the netdb.h include file. This crashes:

#include <netdb.h>
#include <netinet/in.h>

void resolv(void)
{
    struct in_addr h_addr;
    *(unsigned int*)&h_addr = 0;
}

int main(int argc, char *argv[])
{
    resolv();
    return 0;
}

If netdb.h is removed, it's fine.
This appears to be because the "h_addr" macro is retained, and rewrites the main body of the code:

- struct in_addr h_addr;
+ struct in_addr h_addr_list[0];

- *(unsigned int*)&h_addr = 0;
+ *(unsigned int*)&h_addr_list[0] = 0;

It seems that "h_addr" is a reserved name if netdb.h is included, to support the old-style naming. From "man gethostbyname":

Is this a bug in glibc, then, or user error?

This is no more a supported version now.

Here's a more minimal test case. It looks like the compiler isn't correctly calculating function stack sizes when building without optimization.

$ gcc -Wall test.c -o test -O1
$ ./test
74.125.127.104
$ gcc -Wall test.c -o test -O0
$ ./test
74.125.127.103
Segmentation fault (core dumped)

Happens with gcc-snapshot in Maverick too:

$ /usr/lib/gcc-snapshot/bin/gcc -Wall test.c -o test -O0
$ ./test
74.125.127.99
Segmentation fault (core dumped)
https://bugs.launchpad.net/ubuntu/+source/eglibc/+bug/589855
CC-MAIN-2016-40
refinedweb
453
70.39
Anoop Madhusudanan (Articles: 26, Technical Blogs: 2) Average article rating: 4.83 ASP.NET General Server Side, Data Driven Image Rendering in ASP.NET MVC and XAML for Web Apps Posted: 22 Aug 2010 Updated: 3 Nov 2010 Views: 15,564 Rating: 4.87/5 Votes: 7 Popularity: 4.11 Licence: The Code Project Open License (CPOL) Bookmarked: 15 Downloaded: 0 The article explains server-side image rendering using data-bound XAML. You'll be able to render WPF controls and create data-bound image visualizations using this technique ASP.NET MVC, SignalR, and Knockout based real time UI syncing - For co-working UIs and continuous clients Posted: 30 Jan 2012 Updated: 30 Jan 2012 Views: 68,691 Rating: 5.00/5 Votes: 19 Popularity: 6.39 Licence: The Code Project Open License (CPOL) Bookmarked: 99 Downloaded: 0 Demonstrates how to use ASP.NET MVC, SignalR, EF and Knockout Js to build real time syncing UIs Client side scripting General An introduction to Type Script Posted: 2 Oct 2012 Updated: 2 Oct 2012 Views: 13,289 Rating: 4.86/5 Votes: 4 Popularity: 2.91 Licence: The Code Project Open License (CPOL) Bookmarked: 13 Downloaded: 0 A quick introduction to Typescript - Classes, Callbacks, Interfaces, Function Types, and Object Types etc Silverlight Applications Silver Draw - A Silverlight Based Collaboration White Board with Drawing and Chat Posted: 1 Nov 2009 Updated: 2 Nov 2009 Views: 77,974 Rating: 4.98/5 Votes: 37 Popularity: 7.80 Licence: The Code Project Open License (CPOL) Bookmarked: 136 Downloaded: 2,762 Silver Draw shows how to use Silverlight and WCF Polling Duplex services to create realtime collaboration apps. General Silverlight Experimental Hacks (SLEX) - EventTrigger, PropertyTrigger, ReactiveTrigger, InvokeMethodAction, StoryBoardAction, etc.
for Silverlight Posted: 3 Jan 2010 Updated: 14 Jan 2010 Views: 17,845 Rating: 4.81/5 Votes: 8 Popularity: 4.34 Licence: The Code Project Open License (CPOL) Bookmarked: 23 Downloaded: 176 A set of Silverlight Experimental Hacks (1) A custom implementation of EventTrigger and PropertyTrigger (2) Invoking methods in your view model in MVVM (3) Conditionally invoking triggers and behaviors (4) ReactiveTrigger for exporting your custom events Silverlight and WPF Behaviours and Triggers - Understanding, Exploring And Developing Interactivity using C#, Visual Studio and Blend Posted: 9 Feb 2010 Updated: 9 Feb 2010 Views: 39,837 Rating: 4.77/5 Votes: 23 Popularity: 6.49 Licence: The Code Project Open License (CPOL) Bookmarked: 47 Downloaded: 0 A good starting guide for Understanding, Exploring And Developing Silverlight and WPF Behaviors and Triggers - Using C#, Visual Studio and Blend Azure General Analyzing some ‘Big’ Data Using C#, Azure And Apache Hadoop – Analyzing Stack Overflow Data Dumps Posted: 5 Jun 2012 Updated: 5 Jun 2012 Views: 26,368 Rating: 4.82/5 Votes: 11 Popularity: 4.95 Licence: The Code Project Open License (CPOL) Bookmarked: 39 Downloaded: 0 Explains how to use Apache Hadoop and Azure to Analyze Large Data sets, using Map reduce jobs in C# Building A Recommendation Engine - Machine Learning Using Azure, Hadoop And Mahout Posted: 14 Jul 2013 Updated: 19 Jul 2013 Views: 18,159 Rating: 4.93/5 Votes: 11 Popularity: 5.17 Licence: The Code Project Open License (CPOL) Bookmarked: 31 Downloaded: 0 Doing some 'Big Data' and building a Recommendation Engine with Azure, Hadoop and Mahout C# General Fun with Dynamic Objects and MEF in C# 4.0 - A dynamic File System Wrapper Posted: 3 Sep 2009 Updated: 5 Sep 2009 Views: 38,411 Rating: 4.52/5 Votes: 9 Popularity: 4.31 Licence: The Code Project Open License (CPOL) Bookmarked: 30 Downloaded: 442 Exploring the exciting things we can do with DynamicObject in the System.Dynamic namespace and MEF, using .NET 4.0 and C#. 
Adventures with C# 4.0 dynamic - ExpandoObject, ElasticObject, and a Twitter client in 10 minutes Posted: 3 Mar 2010 Updated: 29 Mar 2010 Views: 87,461 Rating: 4.85/5 Votes: 45 Popularity: 7.98 Licence: The Code Project Open License (CPOL) Bookmarked: 103 Downloaded: 2,691 Explores the dynamic features in C# 4.0, and a few cool things you can do with the same. C# as a Scripting Language in Your .NET Applications Using Roslyn Posted: 24 Oct 2011 Updated: 24 Oct 2011 Views: 48,509 Rating: 4.96/5 Votes: 22 Popularity: 6.65 Licence: The Code Project Open License (CPOL) Bookmarked: 75 Downloaded: 0 Explains how to use C# as a scripting language in your .NET applications using Roslyn. Chain Of Responsibility Design Pattern in C#, using Managed Extensibility Framework (MEF) Posted: 13 Nov 2011 Updated: 13 Nov 2011 Views: 21,452 Rating: 5.00/5 Votes: 12 Popularity: 5.40 Licence: The Code Project Open License (CPOL) Bookmarked: 49 Downloaded: 0 This post is about implementing Chain Of Responsibility design pattern, and few possible extensions to the same using Managed Extensibility Framework or MEF Code Generation in Visual Studio From XML Files - A Simpler Approach Posted: 21 Feb 2012 Updated: 21 Feb 2012 Views: 18,744 Rating: 5.00/5 Votes: 11 Popularity: 5.21 Licence: The Code Project Open License (CPOL) Bookmarked: 39 Downloaded: 0 In this post, we'll explore how to generate code from a simple XML model, with in Visual Studio - For a lot of scenarios Reactive Programming for .NET and C# Developers - An Introduction To IEnumerable, IQueryable, IObservable, and IQbservable Posted: 1 Sep 2013 Updated: 1 Sep 2013 Views: 13,555 Rating: 5.00/5 Votes: 8 Popularity: 4.52 Licence: The Code Project Open License (CPOL) Bookmarked: 40 Downloaded: 0 Exploring Reactive Programming including a detailed look at Interactive and Reactive Extensions for .NET and C# developers. 
Utilities XGenPlus - A Flexible Tool to Generate Typed XML Serializers for your .NET Applications Posted: 12 Nov 2007 Updated: 12 Nov 2007 Views: 38,028 Rating: 4.70/5 Votes: 10 Popularity: 4.69 Licence: The Code Project Open License (CPOL) Bookmarked: 35 Downloaded: 456 XGenPlus is a flexible tool to generate typed XML serializers for your .NET applications. It provides more flexibility than the sgen.exe tool combining the efficiency offered by Mvp.Xml.Xgen library. VB.NET General Object Oriented Programming In VB.NET Posted: 18 Nov 2004 Updated: 18 Nov 2004 Views: 448,870 Rating: 4.60/5 Votes: 124 Popularity: 9.63 Licence: Not specified Bookmarked: 228 Downloaded: 4,775 A must read for anyone who is interested in VB.NET. This article uncovers some basic Object Oriented Programming features of Visual Basic .NET. The whole article is divided into 10 lessons. The source code for these lessons is provided with the article. .NET Framework Applications Brainnet 1 - A Neural Netwok Project - With Illustration And Code - Learn Neural Network Programming Step By Step And Develop a Simple Handwriting Detection System Posted: 22 May 2006 Updated: 4 Apr 2007 Views: 251,529 Rating: 4.73/5 Votes: 109 Popularity: 9.63 Licence: Not specified Bookmarked: 395 Downloaded: 5,535 This article is expected to (1) Demonstrate some practical uses neural network programming (2) Give you a fair idea regarding neurons, neural networks and their applications (3) Introduce BrainNet library - an open source Artificial Neural Network library I developed - mainly using the MS.NET CakeRobot - A C#, Arduino, Kinect Robot That Follows Your Gestures Posted: 22 Oct 2013 Updated: 22 Oct 2013 Views: 7,030 Rating: 4.95/5 Votes: 10 Popularity: 4.94 Licence: The Code Project Open License (CPOL) Bookmarked: 18 Downloaded: 0 CakeRobot is a gesture driven robot that moves around based on your hand movement. 
General NXML - Introducing an XML Based Language To Perform Neural Network Processing, Image Analysis, Pattern Detection Etc Posted: 8 Jun 2006 Updated: 21 Oct 2009 Views: 51,898 Rating: 4.80/5 Votes: 24 Popularity: 6.47 Licence: The Code Project Open License (CPOL) Bookmarked: 93 Downloaded: 915 Do some Brain Tumor detection using neural networks, in a very simple and easy manner. This is the story and source code of an XML based language, to help you create, train and run your own neural networks? Samples Learn How to Build a Provider Framework - With an Easy to Understand Example Towards Applying the Provider Pattern Posted: 8 Jan 2007 Updated: 20 Jul 2007 Views: 75,008 Rating: 4.53/5 Votes: 43 Popularity: 7.39 Licence: The Code Project Open License (CPOL) Bookmarked: 155 Downloaded: 536 After reading this article, you'll be able to: (1) Change your mindset a little bit, and start thinking about 'frameworks' instead of just 'code' (2) Understand a lot about practically applying the Provider pattern in your projects (3) Gain much knowledge regarding XML config files and providers. Windows Presentation Foundation Applications Accessing WPF apps over a Web browser via WebSockets and HTML5 Canvas Posted: 11 Apr 2011 Updated: 11 Apr 2011 Views: 25,312 Rating: 4.92/5 Votes: 19 Popularity: 6.28 Licence: The Code Project Open License (CPOL) Bookmarked: 64 Downloaded: 0 Shows how to push your WPF apps to users via WebSockets and HTML5 Canvas. General WPF Extensibility Hacks or WEX - Includes EventTrigger, ReactiveTrigger, InvokeMethodAction, InvokeCommandAction etc. Posted: 14 Jan 2010 Updated: 14 Jan 2010 Views: 14,797 Rating: 5.00/5 Votes: 6 Popularity: 3.89 Licence: The Code Project Open License (CPOL) Bookmarked: 24 Downloaded: 204 A set of extensibility hacks for WPF. A few interesting triggers and actions, including EventTrigger, ReactiveTrigger, InvokeMethodAction, and InvokeCommandAction. Also allows invoking Triggers and Actions based on Conditions. 
Algorithms & Recipes Neural Networks Designing And Implementing A Neural Network Library For Handwriting Detection, Image Analysis etc.- The BrainNet Library - Full Code, Simplified Theory, Full Illustration, And Examples Posted: 4 Jun 2006 Updated: 21 Oct 2009 Views: 194,246 Rating: 4.75/5 Votes: 89 Popularity: 9.26 Licence: The Code Project Open License (CPOL) Bookmarked: 339 Downloaded: 4,714 This article will explain the actual concepts and implementation of Backward Propagation Neural Networks very easily - see project code and samples, like a simple pattern detector, a hand writing detection pad, an xml based neural network processing language etc in the source zip. Design and Architecture Design Patterns Design Your Soccer Engine, and Learn How To Apply Design Patterns (Observer, Decorator, Strategy and Builder Patterns) - Part I and II Posted: 7 Nov 2005 Updated: 8 Jan 2007 Views: 242,806 Rating: 4.76/5 Votes: 153 Popularity: 10.39 Licence: The Code Project Open License (CPOL) Bookmarked: 448 Downloaded: 1,138 This article is expected to (1) Introduce patterns to you in a simple, human readable way (2) Train you how to really identify and apply patterns (3) Demonstrate step by step methods to solve a design problem using patterns Design Your Soccer Engine, and Learn How To Apply Design Patterns (Observer, Decorator, Strategy, and Builder Patterns) - Part III and IV Posted: 9 Nov 2005 Updated: 21 Oct 2009 Views: 84,178 Rating: 4.87/5 Votes: 69 Popularity: 8.94 Licence: The Code Project Open License (CPOL) Bookmarked: 135 Downloaded: 367 This article is a continuation of the previous article, and in this article, we will discuss (1) Applying the Strategy pattern to solve design problems related with 'Team' and 'TeamStrategy', and (2) Applying the Decorator pattern to solve design problems related with the 'Player'. 
General Application Architecture - Driving Forces, Approaches, and Implementation Considerations Posted: 15 Nov 2007 Updated: 9 Dec 2008 Views: 46,168 Rating: 4.58/5 Votes: 12 Popularity: 4.94 Licence: The Code Project Open License (CPOL) Bookmarked: 97 Downloaded: 0 This article discusses various driving forces, approaches, and implementation considerations involved in deciding the application architecture. There is no rocket science here - the whole objective is to aid you to decide an architecture that may suit your scenario. Average blogs rating: 4.50 C# General Creating A Fluent Interface in C# - For training a bunch of Dogs [Technical Blog] Posted: 3 Oct 2009 Updated: 3 Oct 2009 Views: 7,710 Rating: 4.00/5 Votes: 1 Popularity: 0.00 Licence: The Code Project Open License (CPOL) Bookmarked: 7 Downloaded: 0 A simple post on a simple subject String handling General Thinking Beyond ToString() [Technical Blog] Posted: 3 Oct 2009 Updated: 3 Oct 2009 Views: 4,942 Rating: 5.00/5 Votes: 1 Popularity: 0.00 Licence: The Code Project Open License (CPOL) Bookmarked: 4 Downloaded: 0 If you want to convert something to string, what is the best way? Here is a neat extension method for all your objects, so that it'll find the appropriate converter if one exists, or otherwise, fall back to ToString() No tips have been posted. No reference articles have been posted. Anoop Madhusudanan Architect India Architect, Developer, Speaker | Wannabe GUT inventor & Data Scientist | Microsoft MVP in C# | Tweets on JS, Mobile, C#, .NET, Cloud, Hadoop | Seeker. 
I'm on Twitter: @amazedsaint. BigData for .NET Developers Using Azure & Hadoop. Hack Raspberry Pi to Build Apps In C#, Winforms and ASP.NET. Changing Times For Web Developers - Responsive Design and 6 Tips You Need To Survive. 7 Freely Available Ebooks For .NET developers. 5 Back to Basics C# Articles - Fluent Interfaces, Expr Trees etc. 3 Gems from Mono to spice up your .NET Apps. Top 5 Common Mistakes .NET Developers Must Avoid. 6 Cool VS2010 Tips you may find interesting. 4 .NET 4.0 Libraries you *should* know about. Last Updated 16 Apr 2014.
http://www.codeproject.com/script/Articles/MemberArticles.aspx?amid=1117033
CC-MAIN-2014-15
refinedweb
2,238
52.9
Today we will learn how to convert XML to JSON and XML to Dict in Python. We can use the Python xmltodict module to read an XML file and convert it to Dict or JSON data. We can also stream over large XML files and convert them to Dictionary. Before stepping into the coding part, let's first understand why XML conversion is necessary.

Table of Contents

Converting XML to Dict/JSON

XML files have slowly become obsolete but there are pretty large systems on the web that still use this format. XML is heavier than JSON and so, most developers prefer the latter in their applications. When applications need to understand the XML provided by any source, it can be a tedious task to convert it to JSON. The xmltodict module in Python makes this task extremely easy and straightforward to perform.

Getting started with xmltodict

We can get started with the xmltodict module but we need to install it first. We will mainly use pip to perform the installation.

Install xmltodict module

Here is how we can install the xmltodict module using the Python Package Index (pip):

pip install xmltodict

This will be done quickly as xmltodict is a very lightweight module. Here is the output for this installation: The best thing about this installation was that this module is not dependent on any other external module and so, it is lightweight and avoids any version conflicts.

Just to demonstrate, on Debian based systems, this module can be easily installed using the apt tool:

sudo apt install python-xmltodict

Another plus point is that this module has an official Debian package.

Python XML to JSON

The best place to start trying this module will be to perform an operation it was made to perform primarily: XML to JSON conversions. Let's look at a code snippet on how this can be done:

import xmltodict
import pprint
import json

my_xml = """
<audience>
  <id what="attribute">123</id>
  <name>Shubham</name>
</audience>
"""

pp = pprint.PrettyPrinter(indent=4)
pp.pprint(json.dumps(xmltodict.parse(my_xml)))

Let's see the output for this program: Here, we simply use the parse(...) function to convert XML data to JSON and then we use the json module to print JSON in a better format.

Converting XML File to JSON

Keeping XML data in the code itself is neither always possible nor realistic. Usually, we keep our data in either a database or some files. We can directly pick files and convert them to JSON as well. Let's look at a code snippet for how we can perform the conversion with an XML file:

import xmltodict
import pprint
import json

with open('person.xml') as fd:
    doc = xmltodict.parse(fd.read())

pp = pprint.PrettyPrinter(indent=4)
pp.pprint(json.dumps(doc))

Let's see the output for this program: Here, we used another module, pprint, to print the output in a formatted manner. Apart from that, using the open(...) function was straightforward; we used it to get a file descriptor and then parsed the file into a JSON object.

Python XML to Dict

As the module name suggests, xmltodict actually converts the XML data we provide to just a simple Python dictionary. So, we can simply access the data with the dictionary keys as well. Here is a sample program:

import xmltodict
import pprint
import json

my_xml = """
<audience>
  <id what="attribute">123</id>
  <name>Shubham</name>
</audience>
"""

my_dict = xmltodict.parse(my_xml)
print(my_dict['audience']['id'])
print(my_dict['audience']['id']['@what'])

Let's see the output for this program: So, the tags can be used as the keys along with the attribute keys as well. The attribute keys just need to be prefixed with the @ symbol.

Supporting Namespaces in XML

In XML data, we usually have a set of namespaces which defines the scope of the data provided by the XML file.
Let’s look at a code snippet on how this can be done: import xmltodict import pprint import json123</id> <name>Shubham</name> </audience> """ pp = pprint.PrettyPrinter(indent=4) pp.pprint(json.dumps(xmltodict.parse(my_xml))) Let’s see the output for this program: Here, we simply use the parse(...) function to convert XML data to JSON and then we use the json module to print JSON in a better format. Converting XML File to JSON Keeping XML data in the code itself is neither always possible nor it is realistic. Usually, we keep our data in either database or some files. We can directly pick files and convert them to JSON as well. Let’s look at a code snippet how we can perform the conversion with an XML file: import xmltodict import pprint import json with open('person.xml') as fd: doc = xmltodict.parse(fd.read()) pp = pprint.PrettyPrinter(indent=4) pp.pprint(json.dumps(doc)) Let’s see the output for this program: Here, we used another module pprint to print the output in a formatted manner. Apart from that, using the open(...) function was straightforward, we used it get a File descriptor and then parsed the file into a JSON object. Python XML to Dict As the module name suggest itself, xmltodict actually converts the XML data we provide to just a simply Python dictionary. So, we can simply access the data with the dictionary keys as well. Here is a sample program: import xmltodict import pprint import json123</id> <name>Shubham</name> </audience> """ my_dict = xmltodict.parse(my_xml) print(my_dict['audience']['id']) print(my_dict['audience']['id']['@what']) Let’s see the output for this program: So, the tags can be used as the keys along with the attribute keys as well. The attribute keys just need to be prefixed with the @ symbol. Supporting Namespaces in XML In XML data, we usually have a set of namespaces which defines the scope of the data provided by the XML file. 
While converting to the JSON format, it is necessary that these namespaces persist in the JSON form as well. Let us consider this sample XML file:

<root xmlns="" xmlns:
  <audience>
    <id what="attribute">123</id>
    <name>Shubham</name>
  </audience>
</root>

Here is a sample program showing how we can include the XML namespaces in the JSON form as well:

import xmltodict
import pprint
import json

with open('person.xml') as fd:
    doc = xmltodict.parse(fd.read(), process_namespaces=True)

pp = pprint.PrettyPrinter(indent=4)
pp.pprint(json.dumps(doc))

JSON to XML conversion

Although converting from XML to JSON is the prime objective of this module, xmltodict also supports the reverse operation, converting JSON to XML form. This time we will provide the JSON data in the program itself. Here is a sample program:

import xmltodict

student = {
    "data": {
        "name": "Shubham",
        "marks": {
            "math": 92,
            "english": 99
        },
        "id": "s387hs3"
    }
}

print(xmltodict.unparse(student, pretty=True))

Please note that having a single top-level JSON key is necessary for this to work correctly. Suppose we modify our program so that the data contains multiple JSON keys at the very first level:

import xmltodict

student = {
    "name": "Shubham",
    "marks": {
        "math": 92,
        "english": 99
    },
    "id": "s387hs3"
}

print(xmltodict.unparse(student, pretty=True))

In this case, we have three keys at the root level. If we try to unparse this form of JSON, we will face an error, because xmltodict needs to construct the XML with the very first key as the root XML tag. This means that there should only be a single JSON key at the root level of the data.

Conclusion

In this lesson, we studied an excellent Python module which can be used to parse and convert XML to JSON and vice versa. We also learned how to convert XML to dict using the xmltodict module.
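To see what the xmltodict conventions look like without the third-party dependency, here is a standard-library-only sketch that converts the article's small XML sample into a nested dict using xml.etree.ElementTree. The helper name etree_to_dict is mine, not part of xmltodict, and it handles only the simple cases shown in this article (no repeated tags or namespaces):

```python
import xml.etree.ElementTree as ET

def etree_to_dict(elem):
    """Convert an Element to nested dicts, mimicking xmltodict's conventions:
    attributes become '@'-prefixed keys, leaf text becomes '#text' when
    attributes are present, otherwise the bare text value."""
    d = {'@' + k: v for k, v in elem.attrib.items()}
    children = list(elem)
    if children:
        for child in children:
            d[child.tag] = etree_to_dict(child)
        return d
    text = (elem.text or '').strip()
    if d:
        d['#text'] = text
        return d
    return text

xml_src = '<audience><id what="attribute">123</id><name>Shubham</name></audience>'
root = ET.fromstring(xml_src)
result = {root.tag: etree_to_dict(root)}
print(result)
# {'audience': {'id': {'@what': 'attribute', '#text': '123'}, 'name': 'Shubham'}}
```

This is essentially what xmltodict.parse does for flat documents, which is why the '@what' key appears in the earlier examples.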
https://www.journaldev.com/19392/python-xml-to-json-dict
Understanding Permissions

An important aspect of security on a computer system is the granting or denying of permissions (sometimes called access rights). A permission is the ability to perform a specific operation, such as to gain access to data or to execute code. Permissions can be granted at the level of directories, subdirectories, files, or applications, or for specific data within files or functions within applications. Permissions in OS X are controlled at many levels, from the Mach and BSD components of the kernel, through higher levels of the operating system and, for networked applications, through the networking protocols. This chapter describes the basic permissions policies at various levels in OS X.

Mach Port Rights

At the deepest level of OS X system architecture, the basis for the operating system's built-in security features is provided by the Mach and BSD components of the kernel. This section provides only a very brief and cursory introduction to Mach. For more information on Mach in OS X and Mach programming, see Kernel Programming Guide.

Mach security is based on ports and port rights. A Mach port is an endpoint of a communication channel between a client who requests a service and a server that provides the service. Mach ports are unidirectional; a reply to a service request must use a second port. A port has a set of associated port rights, which are owned by tasks. A port right specifies that a particular task can use that port. Each port has one receive right, the owner of which can receive messages on that port. Each port has one or more send rights; the owners of send rights can send messages to the port. Rights can be transferred between tasks by attaching them to a Mach message.

A single task (or other Mach object, such as a thread or the host itself) may have multiple ports that provide access to resources it supports. For example, a task might have a name port and a control port.
Access to the control port allows the task to be manipulated. In contrast, access to the name port merely allows the client to obtain information about the task or perform other nonprivileged operations on it.

Each process has a port right namespace, which maps small integers known as port right names to their corresponding port rights. A port right name is meaningful only within that task's port right namespace. A task can transfer a port right to another task by sending it the corresponding port right name. However, unless it sends the name correctly, the receiving task won't be able to use the right. The only way to transmit a port right between two tasks is by sending a Mach message and attaching the right name to that message using the correct syntax and message structure.

A send right to a task's control port conveys the ability to manipulate that task, including the ability to read and modify the task's address space, and so forth. Therefore, whoever owns a send right for a task's port effectively owns the task and can manipulate the task's state without regard to BSD security policies or any higher-level security policies. In other words, an expert in Mach programming with local administrator access to an OS X machine can bypass BSD and higher-level security features. Therefore, it is very important to use strong administrator passwords, keep them secure, and control physical security for any computer containing sensitive information.

BSD Security Policies

The BSD portion of the OS X kernel enforces access to applications and files. The most familiar aspect of BSD security is the file system security policy, which controls access to files and directories. BSD file permissions are described in "File System Security Policy" and in more detail in File System Overview. In addition to the file system security policy, BSD defines two other security policies used in special cases: the owner-or-root security policy and the root EUID security policy. Each of these policies is described briefly in the following subsections.
The BSD security model is based on matching up attributes of a file system object (a file or directory) with attributes of the process attempting to gain access to that object. For example, suppose a file has an owning user ID of 1234 and the file permissions specify that the owning user has read and write access to that file. Suppose further that Alice has an effective user ID (EUID) of 1234. When Alice attempts to read the file, BSD matches her EUID with the file's owning UID and grants Alice access to read the file.

Each process has three user IDs: the real user ID (RUID), the effective user ID (EUID), and the saved user ID (SUID). The RUID is always inherited from the user or process that executes the process. The EUID is normally the same as the RUID, but it can differ in special circumstances, as described in "Owner-or-Root Security Policy" below. In most cases, it is the EUID that BSD checks to determine permissions. The SUID is used by BSD to enable a privileged process to switch into and out of privileged mode.

Each process also has real and saved group IDs (RGID and SGID) and up to 16 effective group IDs (EGIDs), which work in a way analogous to the process's user IDs. For more details on these UIDs and GIDs, see The Design and Implementation of the 4.4BSD Operating System, by Marshall Kirk McKusick and others.

Starting in OS X v10.5, the kernel includes an implementation of the TrustedBSD Mandatory Access Control (MAC) framework. A formal requirements language suitable for third-party developer use was added in OS X v10.7. Mandatory access control, also known as sandboxing, is discussed in "Sandboxing and the Mandatory Access Control Framework."

File System Security Policy

OS X supports both the standard UNIX user-group-other permissions model (supplemented by BSD file flags) and POSIX access control lists (ACLs).
In the user-group-other permission model, the access rights to a file depend on the effective user ID (EUID) and effective group ID (EGID) of the calling process, as follows:

- If the application's sandbox forbids the requested access, the request is denied.
- If ownership checking has been disabled for the volume in question by the system administrator (with a checkbox in its Finder Get Info window), the request is granted.
- If an access control entry exists on the file, it is evaluated and used to determine access rights.
- If a BSD file flag prohibits the operation, the operation is denied.
- Otherwise, if the user ID matches the owner of the file, the "user" permissions (also called "owner" permissions) are used.
- Otherwise, if the group ID matches the group for the file, the "group" permissions are used.
- Otherwise, the "other" permissions are used.

For more details, read "OS X File System Security" in File System Programming Guide.

Owner-or-Root Security Policy

The owner-or-root security policy is used to control execution of a few specific operations. Under this policy, a specific operation on an object can be performed by any process whose EUID is the same as the object's owner or whose EUID is zero (0). The user with a UID of zero is called the root user (also called the superuser), and a process running with an EUID of zero is said to be running as root. This policy is used in three primary places:

- Changing permissions on files with the chmod system call. Only the owner of the file or a process running as root can change a file's permissions.
- Deleting or renaming files within a directory whose sticky bit is set. Only the owner of the file, the owner of the enclosing directory, and root can delete or rename the file.
- Sending signals to running processes (including killing the process). A process can only send a signal to another process if their EUIDs match or if the sending process has an EUID of 0.
Root EUID Security Policy

Under the root EUID security policy, an operation can be performed only by a process with an EUID of 0. Such operations are sometimes referred to as privileged operations. Some of the common situations where the root EUID security policy applies are:

- Changing the owner of a file system object
- Binding TCP/IP sockets to low-numbered ports
- Making changes to the network configuration
- Certain I/O Kit operations
- Getting the Mach host privileged special port

Authorization Services and BSD Security Policies

Because a process running with an EUID of 0 has many special privileges, such a process can be a target of malicious hackers. To minimize such risks, you should factor your application into privileged and nonprivileged processes. See "Using Authorization" for more information and for references that describe and illustrate this technique.

Processes can change their EUID and EGID by calling setuid, setgid, and related system calls. For example, a process can run as root temporarily and then switch to a less privileged EUID to minimize exposure to malicious attacks. This technique is complicated by the confusing semantics of the setuid call and by the fact that these calls operate somewhat differently on different implementations of UNIX (including different versions of OS X). For a detailed discussion of the issues involved, see Setuid Demystified by Chen, Wagner, and Dean in Proceedings of the 11th USENIX Security Symposium, 2002, available at the USENIX website. For more information on the system calls, see the man pages for setuid, setreuid, and setregid. The setuid man page includes information about seteuid, setgid, and setegid as well.

Sandboxing and the Mandatory Access Control Framework

Sandboxing provides fine-grained control over the ability of processes to access system resources. For example, you can prevent a process from connecting to any network, from writing any files, or from writing any files outside of specific directories.
This feature limits the amount of damage that can be done by a malicious hacker who gains control of an application. Under the hood, sandboxing support is provided by the OS X Mandatory Access Control (MAC) framework, which is an implementation of the TrustedBSD MAC framework.

Copyright © 2003, 2013 Apple Inc. All Rights Reserved. Updated: 2013-01-28
https://developer.apple.com/library/mac/documentation/Security/Conceptual/AuthenticationAndAuthorizationGuide/Permissions/Permissions.html
import "k8s.io/kubernetes/pkg/kubectl/explain"

Files: explain.go, field_lookup.go, fields_printer.go, fields_printer_builder.go, formatter.go, model_printer.go, recursive_fields_printer.go, typename.go

GetTypeName returns the type of a schema.

LookupSchemaForField looks for the schema of a given path in a base schema.

func PrintModel(name string, writer *Formatter, builder fieldsPrinterBuilder, schema proto.Schema, gvk schema.GroupVersionKind) error

PrintModel prints the description of a schema in writer.

func PrintModelDescription(fieldsPath []string, w io.Writer, schema proto.Schema, gvk schema.GroupVersionKind, recursive bool) error

PrintModelDescription prints the description of a specific model or dot path. If recursive, all components nested within the fields of the schema will be printed.

func SplitAndParseResourceRequest(inResource string, mapper meta.RESTMapper) (string, []string, error)

SplitAndParseResourceRequest separates the user's input into a model and fields.

type Formatter

Formatter helps you write with indentation, and can wrap text as needed.

Indent creates a new Formatter that will indent the code by that much more.

Write writes a string with the indentation set for the Formatter. This is not wrapping text.

WriteWrapped writes a string with the indentation set for the Formatter, and wraps as needed.

Package explain imports 6 packages and is imported by 13 packages. Updated 2019-05-12.
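The Formatter described above is essentially an indenting writer. Here is a minimal stand-alone sketch of that idea in Go — loosely modeled on the Indent/Write methods listed above, but not the actual kubectl implementation (the render helper and the buffer-backed struct are mine):

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
)

// Formatter writes lines at a fixed indentation, loosely modeled on
// the kubectl/explain Formatter documented above.
type Formatter struct {
	out    *bytes.Buffer
	indent int
}

// Indent returns a new Formatter that indents by `by` more spaces.
func (f *Formatter) Indent(by int) *Formatter {
	return &Formatter{out: f.out, indent: f.indent + by}
}

// Write writes one line at the current indentation (no wrapping).
func (f *Formatter) Write(s string) {
	fmt.Fprintf(f.out, "%s%s\n", strings.Repeat(" ", f.indent), s)
}

// render shows the nested-indentation usage pattern kubectl uses
// when printing a model's FIELDS section.
func render() string {
	var buf bytes.Buffer
	f := &Formatter{out: &buf}
	f.Write("FIELDS:")
	f.Indent(3).Write("name <string>")
	return buf.String()
}

func main() {
	fmt.Print(render())
}
```

Because Indent returns a fresh Formatter sharing the same buffer, nested sections can be printed recursively, which is how the recursive fields printer in this package can descend through a schema.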
https://godoc.org/k8s.io/kubernetes/pkg/kubectl/explain
how to get IP address

Aymen++ 04-17-2004, 06:42 AM
i wana my computer to be server and another computer in remote place to be a client. after making my computer the server, how can i get its IP address to make the remote computer talk with it? i tried this but it worked only in the LAN:

import java.net.*;

public class LocalHostDemo {
    public static void main(String args[]) {
        System.out.println("Looking up local host");
        try {
            InetAddress localAddress = InetAddress.getLocalHost();
            System.out.println("IP address: " + localAddress.getHostAddress());
        } catch (UnknownHostException uhe) {
            System.out.println("Error - unable to resolve localhost");
        }
    }
}

i mean the client was in the same LAN, but now i wana make the client in remote place.

l3vi 04-17-2004, 07:25 AM
You could get your IP adress simply by going to . If thats what your asking.. Not 2 sure what ur asking tho.

Aymen++ 04-17-2004, 07:28 AM
how to get it using java?

l3vi 04-17-2004, 07:32 AM
O, I am unfamiliar with java. However, you may find you answer here: I beleive you may find your answer around page 29.

Aymen++ 04-17-2004, 08:08 AM
i tried the ip address that i got it from isn't the same when i go to network properties in widows, and it was the same as another compuer in the same LAN. when i wana make a connection between two computers in a different LAN which ip should i use?

Aymen++ 04-17-2004, 02:34 PM
or in other words i wana make connection between two computers, one of them is the client and the other one is the server, how can i do it with java or even c++?

Mhtml 04-17-2004, 03:20 PM
Well you would use winsock in C++ .. assuming this is on windows. You would use gethostname() and then probably use gethostbyname() to resolve the ip... There is most likely a better way but I haven't really done much winsock stuff so that's how I'd do it.
Aymen++ 04-20-2004, 06:02 AM
actually the programming language is not the problem, what i wana know is how to make the connection, for example, in the network of our university there are many routers, if i wana connect to a computer outside this network, which IP should i give, and so on.

black3842 04-20-2004, 08:28 AM
it sounds like your question is sort of a DNS question, Correct me if I'm wrong, you have 2 computers that connect on the same network, they can resolve one another, no problem all is well, but if you put client on a remote host, it can no longer resolve name of server. This is a DNS issue of sorts.

Is the IP of your computer in network neighborhood something like 192.168.x.x? ...if so this means it's a private IP address. you said "i tried the ip address that i got it from isn't the same when i go to network properties in windows." This means you are probably using either a proxy server or firewall or Network Address Translation of some kind... which complicates things... a lot. The IP shown on whatismyIP is probably that of your router/firewall, proxy server, or your NAT'd IP.

One book I would recommend is Java Network Protocols Blackbook... it's niiiice for java network stuff, but the internet is the best resource... search and read, search and read, rinse and repeat.

Sorry i don't have a quick answer for you, but a lot depends on how your network is setup... if there's a campus firewall like most universities have, then a remote client wouldn't be able to make a connection to a computer on the internal campus network at all most likely... but what you could try is doing it the other way around. Run the server remotely, and the client locally for testing purposes. Your client could make the outbound connection. Check out gotdns.org for free dynamic dns service if needed... to give your remote server a 'NAME' that resolves from anywhere on the internet.
hope this helped, but have a feeling it just gave you more questions... there's a lot to it...

Regards,

Aymen++ 04-20-2004, 08:51 AM
thanx alot black3842, i really appretiate your information, but till now i don't know how to make the connection through routers and firewalls :( , do u know any website for more information?

l3vi 04-20-2004, 10:04 PM
Okay, when you go to your windows command prompt, and you type up ipconfig (or something similar), you will get your network address. This may be something similar to 192.168.1.4 Usually it is only the last number that changes, depending on the router settings. Then, when you go to whatismyip.com, you will get your internet connection IP address.

What happens when you get a connection in a lan, is kind of like a branch type system.

First, you have your internet connection ---
Then, your internet connection connects to a router ---0
Your router will then split that connection to multiple computers ---0<

Your router will give all of the new computers a different IP address, or a LAN address, different from the Internet Connection IP address.

If your computer is behind a router/LAN, and you would like for it to be directly accessed by the internet, you must edit your forwarding/DHCP settings. For example, if you have a Linksys router, you will have to go to the configuration page (usually 192.168.1.1), and on the top menu, click the advanced tab, then click the Forwarding Tab. On this page, you would enter the port and name that you would like your computer to be accessible by (Internet Port: 80), then you also have to enter in your computer's lan address. (The address you get from the microsoft command prompt: ipconfig)

Does this help?

lostinjava 05-20-2004, 05:25 PM
if u aint got the link already then u are gonna love me for givin it to you. this will give u all the standard classes and the construstors and methods in java.
i know for a fact that there is a getIp() (not sure if that is what it is called in java) but it WILL be there somewhere!! Another way is to think Cookies!!! lots n lots n lots of ways of gettin IP address.

..Come find me in the Java Jungle...
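As an aside on the thread above: the LAN-versus-public distinction the posters are circling around can be seen by listing every address bound to the local interfaces. The sketch below uses modern Java (the class and method names are mine, not from the thread); note that none of the printed addresses will be the public IP when the host sits behind NAT, which is exactly why the original poster's getLocalHost() approach only worked inside the LAN.

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ListAddresses {
    // Collect every address bound to a local interface. Behind a
    // router/NAT these are all private (e.g. 192.168.x.x or 127.0.0.1);
    // the public IP must be obtained from an outside service or the router.
    static List<String> listAddresses() {
        List<String> result = new ArrayList<>();
        try {
            for (NetworkInterface ni : Collections.list(NetworkInterface.getNetworkInterfaces())) {
                for (InetAddress addr : Collections.list(ni.getInetAddresses())) {
                    result.add(ni.getName() + ": " + addr.getHostAddress());
                }
            }
        } catch (SocketException e) {
            // No interfaces available; return whatever was collected.
        }
        return result;
    }

    public static void main(String[] args) {
        for (String line : listAddresses()) {
            System.out.println(line);
        }
    }
}
```

On a typical machine this prints at least the loopback entry plus one private LAN address per active interface.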
http://www.codingforums.com/archive/index.php/t-37040.html
On 2003-05-07 18:32:34 -0500 Catherine Luedecker <address@hidden> wrote:

> Hello everybody,
> I am new to the list and new to GNUstep. Not totally sure about things yet.

Welcome.

> is it necessary to compile and run the test in the Pantomime source files? I
> get an error every time I try to "make" it. I am compiling on a linux system
> using Pantomime-1.1.0pre1

There's also a new Pantomime-1.1.0pre2 (look around). What versions are you using in your GNUstep installation? I have Linux RH (6.2, 2.2.26-6.2.3smp), WindowMaker 0.8.1, GNUstep make 1.6.0, base 1.6.0, gui 0.8.5 and back 0.8.5.

It looks below like you are not using the gnustep-make package, or you have not set your environment variables by running GNUstep.sh, which is part of the package.

Other problems you'll find compiling:
- Comment out the #ifdef and #endif on lines 31 and 33 of GSCategories.h

> Making all for app Pantomime...
> Compiling file Pantomime.m ...
> Pantomime.m:38:35: warning: Pantomime/MimeBodyPart.h: No such file or
> directory
> Pantomime.m: In function `main':
> Pantomime.m:126: `MimeBodyPart' undeclared (first use in this function)
> Pantomime.m:126: (Each undeclared identifier is reported only once
> Pantomime.m:126: for each function it appears in.)
> Pantomime.m:126: `aMimeBodyPart' undeclared (first use in this function)
> make[1]: *** [shared_obj/ix86/linux-gnu/gnu-gnu-gnu/Pantomime.o] Error 1
> make: *** [Pantomime.all.app.variables] Error 2
>
> I have been trying to compile GNUMail without success and I wonder if this
> has anything to do with it.

You'll need to set up your GNUstep environment; for that it's recommended to read this guide:

From there both Pantomime and GNUMail should compile with just the above change to the header file.

Regards.
--
René Berber
http://lists.gnu.org/archive/html/discuss-gnustep/2003-05/msg00049.html
Cross-Language Remoting with mod_perlservice

Mod_perlservice? What is That?

Mod_perlservice is a cool, new way to do remoting – sharing data between server and client processes – with Perl and Apache. Let's start by breaking that crazy name apart: mod + perl + service. Mod means that it's a module for the popular and ubiquitous Apache HTTP Server. Perl represents the popular and ubiquitous programming language. Service is the unique part. It's the new ingredient that unifies Apache, Perl, and XML into an easy-to-use web services system.

With mod_perlservice, you can write Perl subs and packages on your server and call them over the internet from client code. Clients can pass scalars, arrays, and hashes to the server-side subroutines and obtain the return value (scalar, array, or hash) back from the remote code. Some folks refer to this functionality as "remoting" or "RPC," so if you like you can say mod_perlservice is remoting with Perl and Apache.

Now that you know what mod_perlservice is, let's look at why it is. I believe that mod_perlservice has a very clean, easy-to-use interface when compared with other RPC systems. Also, because it builds on the Apache platform it benefits from Apache's ubiquity, security, and status as a standard. Mod_perlservice sports an embedded Perl interpreter to offer high performance for demanding applications.

How Can I Use mod_perlservice?

Mod_perlservice helps create networked applications that require client-server communication, information processing, and sharing. Mod_perlservice is for applications, not for creating dynamic content for your HTML pages. However, you surely can use it for Flash remoting with Perl.
Here are some usage examples:

- A desktop application (written using your favorite C++ GUI library) that records the current local air temperature and sends it to an online database every 10 minutes. Any client can query the server to obtain the current and historical local air temperature of any other participating client.
- A Flash-based stock portfolio management system. You can create model stock portfolios and retrieve real-time stock quote information and news.
- A command-line utility in Perl that accepts English sentences on standard input and outputs the sentences in French. Translation occurs in server-side Perl code. If the sentence is idiomatic and the translation is incorrect, the user has the option of sending the server a correct translation to store in an online idiom database.

How Do I Start?

Let's move on to the fun stuff and set up a working installation. Before we begin, make sure you have everything you need! You need Apache HTTPD, Perl, Expat, mod_perlservice, and a mod_perlservice client library (Perl Client | C Client | Flash Client). You must download a client library separately, as the distribution does not include any clients!

In your build directory:

myhost$ tar -xvzf mod_perlservice.tar.gz
myhost$ cd mod_perlservice
myhost$ ./configure
myhost$ make
myhost$ make install

If everything goes to plan, you'll end up with a fresh mod_perlservice.so in your Apache modules directory (usually /etc/apache/modules). Now it's time to configure Apache to use mod_perlservice.
cd into your Apache configuration directory (usually /etc/apache/conf).

Add the following lines to the file apache.conf (or httpd.conf, if you have only a single configuration file):

LoadModule perlservice_module modules/mod_perlservice.so
AddModule mod_perlservice.c

Add the following lines to commonapache.conf, if you have it, and to httpd.conf if you don't:

<IfModule mod_perlservice.c>
    <Location /perlservice>
        SetHandler mod_perlservice
        Allow From All
        PerlApp myappname /my/app/dir
        #Examples
        PerlApp stockmarket /home/services/stockmarket
        PerlApp temperature /home/services/temperature
    </Location>
</IfModule>

Pay close attention to the PerlApp directive. For every mod_perlservice application you want to run, you need a PerlApp directive. If I were creating a stock market application, I might create a directory /home/services/stockmarket and add the following PerlApp directive:

PerlApp stockmarket /home/services/stockmarket

This tells mod_perlservice to host an application called stockmarket with the Perl code files located in the /home/services/stockmarket directory. You may run as many service applications as you wish, and you may organize them however you wish.

With the configuration files updated, the next step is to restart Apache:

myhost$ /etc/init.d/apache restart

or

myhost$ apachectl restart

Now, if everything went as planned, mod_perlservice should be installed. Congratulations!

An Example

Let's create that stock portfolio example mentioned earlier. It won't support real-time quotes, but will instead use a static database of common stock names and historical prices. The application will support stock information for General Electric (GE), Red Hat (RHAT), Coca-Cola (KO), and Caterpillar (CAT). The application will be called stockmarket and will keep all of its Perl files in the stock market application directory (/home/services/stockmarket).
The first file will be quotes.pm, reading as follows:

our $lookups = {
    "General Electric" => "GE",
    "Red Hat"          => "RHAT",
    "Coca Cola"        => "KO",
    "Caterpillar Inc"  => "CAT"
};

our $stocksymbols = {
    "GE"   => { "Price" => 33.91, "EarningsPerShare" => 1.544 },
    "RHAT" => { "Price" => 14.96, "EarningsPerShare" => 0.129 },
    "KO"   => { "Price" => 42.84, "EarningsPerShare" => 1.984 },
    "CAT"  => { "Price" => 75.74, "EarningsPerShare" => 4.306 }
};

package quotes;

sub lookupSymbol {
    my $companyname = shift;
    return $lookups->{$companyname};
}

sub getLookupTable {
    return $lookups;
}

sub getStockPrice {
    my $stocksymbol = shift;
    return $stocksymbols->{$stocksymbol}->{"Price"};
}

sub getAllStockInfo {
    my $stocksymbol = shift;
    return $stocksymbols->{$stocksymbol};
}

1;

That's the example of the server-side program. Basically, two static "databases" ($lookups and $stocksymbols) provide information about a limited universe of stocks. The methods above query the static databases; the behavior should be fairly self-explanatory. You may have as many .pm files in your application as you wish, and you may also define as many packages within a .pm file as you wish. An extension to this application might be a file called news.pm that enables you to fetch current and historical news about your favorite stocks.

Now let's talk some security. As it stands, this code won't work; mod_perlservice will restrict access to any file and method you don't explicitly export for public use. Use the .serviceaccess file to export things. Create this file in each application directory you declare with mod_perlservice, or you'll have no access. An example file might read:

<ServiceAccess>
    <AllowFile name="quotes.pm">
        Allow quotes::*
    </AllowFile>
</ServiceAccess>

In the stock market example, this file should be /home/services/stockmarket/.serviceaccess. Be sure that the apache user does not own this file; that could be bad for security.
This file allows access to the file quotes.pm and allows public access to all (*) of the methods in package quotes. If I wanted to restrict access only to getStockPrice, I would have written Allow quotes::getStockPrice. After that, I could add access to lookupSymbol with Allow quotes::lookupSymbol. To make quotes.pm public carte blanche, use Allow *. You won't need to restart Apache when you make changes to this file, as it reloads automatically.

Client Code

Well, so far I've only shown you half the story. It's time to create some client-side code. This client example uses the Flash "PerlService" library, just one of the client-side interfaces to mod_perlservice. The Flash client works well for browser interfaces, while the Perl and C clients can create command-line or GUI (i.e., GTK or Qt) applications. This article is on the web, so we'll give the Flash interface a spin and then go through an example in Perl.

The first code smidgen should go in the first root frame of your Flash application. It instantiates the global PerlService object and creates event handlers for when remote method calls return from the server. The event handlers output the requested stock information to the display box.
#include "PerlService-0.0.2.as"

// Create a global PerlService object.
// Tell the PerlService object about the remote code we want to use:
//   arg1) host:
//   arg2) application: stockmarket
//   arg3) file: quotes.pm
//   arg4) package: quotes
_global.ps = new PerlService("", "stockmarket", "quotes.pm", "quotes");

// First declare three callback functions to handle return values
function onStockPrice(val) {
    output.text = "StockPrice: " + symbolInput.text + " " + val + "\n" + output.text;
}

function onAllStockInfo(val) {
    output.text = "Stock Info: " + allInfoInput.text + "\n"
        + "\tPrice: " + val.Price + "\n"
        + "\tEarnings Per Share: " + val.EarningsPerShare + "\n"
        + output.text;
}

function onLookupSymbol(val) {
    output.text = "Lookup Result: " + symbolInput.text + " " + val + "\n" + output.text;
}

// Register callback handlers for managing return values from remote methods
// (e.g., onStockPrice receives the return value from remote method getStockPrice)
ps.registerReplyHandler("getStockPrice", onStockPrice);
ps.registerReplyHandler("getAllStockInfo", onAllStockInfo);
ps.registerReplyHandler("lookupSymbol", onLookupSymbol);

Now for the code that makes things happen. The following code attaches to three separate buttons. When clicked, the buttons call the remote Perl methods using the global PerlService object. Flash ActionScript is an event-driven system, so click event handlers will call the remote code, and return event handlers will do something with those values.

Figure 1. Button and code associations.

When a user presses Button 1, call the remote method getStockPrice and pass the text in the first input box as an argument.

on (release) {
    ps.getStockPrice(box1.text);
}

When the user presses Button 2, call the remote method getAllStockInfo and pass the text in the second input box as an argument.

on (release) {
    ps.getAllStockInfo(box2.text);
}

When the user presses Button 3, call the remote method lookupSymbol and pass the text in the third input box as an argument.
on (release) { ps.lookupSymbol(box3.text); }

That's the entire Flash example. Here is the finished product.

Perl Client

Not everyone uses Flash, especially in the Free Software community. The great thing about mod_perlservice is that everyone can join the party. Here's a Perl client that uses the same server-side stock market API.

use PService;

my $hostname = "";
my $appname = "stockmarket";
my $filename = "quotes.pm";
my $package = "quotes";

#Create the client object with following arguments:
#1) The host you want to use
#2) The application on the host
#3) The perl module file name
#4) The package you want to use
my $ps = PSClient->new( $hostname, $appname, $filename, $package );

# Just call those remote methods and get the return value
my $price = $ps->getStockPrice("GE");
my $info = $ps->getAllStockInfo("RHAT");
my $lookup = $ps->lookupSymbol("Coca Cola");

#Share your exciting new information with standard output
print "GE Price: " . $price . "\n";
print "Red Hat Price: " . $info->{Price} . "\n";
print "Red Hat EPS: " . $info->{EarningsPerShare} . "\n";
print "Coca-Cola's ticker symbol is " . $lookup . "\n";

Using the PSClient object to call remote methods might feel a little awkward if you expect to call them via quotes::getStockPrice(), but think of the $ps instance as a proxy class to your remote methods, if you like. If things don't work, use print $ps->get_errmsg(); to print an error message. That's a local reserved function, so it doesn't call the server. It's one of a few reserved functions detailed in the Perl client reference.

As you can see, it requires much less work to create an example with the Perl client. You simply instantiate the PSClient object, call the remote methods, and do something with the return values. That's it. There is no protocol decoding, dealing with HTTP, CGI arguments, or any of the old annoyances. Your remote code may as well be local code.

Thanks for Taking the Tour

That's mod_perlservice.
I’m sure many of you who are developing client-server applications can see the advantages of this system. Personally, I’ve always found the existing technologies to be inflexible and/or too cumbersome. The mod_perlservice system offers a clean, simple, and scalable interface that unites client-side and server-side code in the most sensible way yet. What’s next? mod_parrotservice!
https://www.perl.com/pub/2004/11/18/mod_perlservice.html/
Sys::RunAlways - make sure there is always one invocation of a script active

use Sys::RunAlways;
# code of which there must always be one instance running on the system

use Sys::RunAlways silent => 1; # don't tell the world we're starting
# code of which there must always be one instance running on the system

Provide a simple way to make sure the script from which this module is loaded is always running on the server. This documentation describes version 0.05.

There are no methods. The functionality of this module depends on the availability of the DATA handle in the script from which this module is called (more specifically: in the "main" namespace).

At INIT time, it is checked whether there is a DATA handle: if not, it exits with an error message on STDERR and an exit value of 2. If the DATA handle is available, and it cannot be flocked, it exits silently with an exit value of 0. If there is a DATA handle, and it could be flocked, a message is put on STDERR and execution continues without any further interference.

Optionally, the message on STDERR can be prevented by specifying the "silent" parameter in the use statement with a true value, like:

use Sys::RunAlways silent => 1;

Required modules: Fcntl (any)

Execution of scripts that are (sym)linked to another script will all be seen as execution of the same script, even though the error message will only show the specified script name. This could be considered a bug or a feature.

If you change the script while it is running, the script will effectively lose its lock on the file, causing any subsequent run of the same script to be successful, and thus two instances of the same script to run at the same time (which is what you wanted to prevent by using Sys::RunAlways in the first place). Therefore, make sure that no instances of the script are running (and won't be started by cronjobs while making changes) if you really want to be 100% sure that only one instance of the script is running at the same time.
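The mechanism described above can be sketched in a few lines of Perl. This is a simplified, hypothetical illustration of the technique, not the module's actual source:

```perl
# Simplified illustration of the DATA-handle locking technique
# (hypothetical sketch, not Sys::RunAlways' actual source).
use Fcntl ':flock';

INIT {
    # Exit value 2: the script has no __END__/__DATA__ section.
    unless ( defined fileno *main::DATA ) {
        warn "Script needs a __DATA__ or __END__ section\n";
        exit 2;
    }
    # Exit value 0: another instance already holds the lock.
    exit 0 unless flock *main::DATA, LOCK_EX | LOCK_NB;
    warn "Starting the one running instance of this script\n";
}

# ... the code that must always have one instance running goes here ...

__DATA__
```

Because the lock is taken on the script's own DATA handle, no separate lock file is needed, and the lock disappears automatically when the process exits.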
Inspired by Randal Schwartz's mention of using the DATA handle as a semaphore on the London PM mailing list. Elizabeth Mattijsen Copyright (c) 2005, 2006, 2012 Elizabeth Mattijsen <liz@dijkmat.nl>. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/dist/Sys-RunAlways/lib/Sys/RunAlways.pm
Odoo Help
https://www.odoo.com/forum/help-1/question/infinity-looping-on-wizard-creation-94489
CC-MAIN-2018-05
refinedweb
144
58.79
Difference between revisions of "Macro 3d Printer Slicer Individual Parts" Latest revision as of 10:43, 6 November 2020 Description This code, when run, will export the visible bodies at the top level (bodies deeper in the tree will be ignored) of the currently open design to individual STL files, and open them it in the slicing software that you use. This macro will look for Cura as the default but you can change it to any other slider by changing the SLICERAPP variable in the source code. It is best used by creating a link to the macro on the toolbar, and when your ready to slice the object, just click it and your objects, as they appear on the screen in FreeCAD will appear on your slicing software's interface, ready to slice. It will also create several STL files with the same filename as the design file and the part label in the same directory as the design file. Script The SLICERAPP variable can be changed to any slicing software of your choosing. If a specific object is not exported you might have to add the respective type to the doexport array. Macro_3d_Printer_Slicer_Individual_Parts.py import FreeCAD import Mesh import sys import math import os import subprocess SLICERAPP= "cura" # Put your Slicer program here #(".FCStd", "--") visible_objs = [] # Get Objects in document doc = App.ActiveDocument objs = doc.Objects stlFile = "" stlFiles = [ SLICERAPP ] # hide all for obj in objs: print(obj.Label + "//" + obj.TypeId) print(len(obj.InList)) if obj.ViewObject.isVisible() and hasattr(obj, 'Shape') and (len(obj.InList) <= 1): visible_objs.append(obj) for obj in visible_objs: stlFile = OutDir+str(obj.Label)+".stl" Mesh.export([obj],stlFile) stlFiles.append(stlFile) print ("Exporting " + stlFile + "\n") print ("Calling subprocess: " + str(stlFiles)+"\n") subprocess.Popen(stlFiles) Credits Thanks to cae2100 for developing the original macro code - also available here. Thanks to Wmayer for his help in writing this script. Original forum topic:
https://wiki.freecadweb.org/index.php?title=Macro_3d_Printer_Slicer_Individual_Parts&diff=791057&oldid=790931
CC-MAIN-2022-27
refinedweb
318
55.34
hi Robert, thank you for your replies. I couldn't find much documentation/examples of this, but this is what I came up with (below). is that the way I'm supposed to use the MappingCharFilter? also, if that is the correct way, wouldn't it make sense to return a reference to "this" from NormalizeCharMap.Builder.add() so that we can chain the calls to add() like so: builder.add( ",", ", " ).add( ";", "; " ).build() ? thanks, Igal public class CommaSpaceCharFilter extends MappingCharFilter { public CommaSpaceCharFilter( Reader input ) { super( getMap(), input ); } final static NormalizeCharMap getMap() { NormalizeCharMap.Builder builder = new NormalizeCharMap.Builder(); builder.add( ",", ", " ); builder.add( ";", "; " ); NormalizeCharMap ncm = builder.build(); return ncm; } } On 11/3/2012 5:13 PM, Robert Muir wrote: > On Sat, Nov 3, 2012 at 7:47 PM, Igal @ getRailo.org <igal@getrailo.org> wrote: >> I considered it, and it's definitely an option. >> >> but I read in the book "Lucene In Action" that MappingCharFilter is >> inefficient and I'm not sure that I need that. if implementing my own >> involves a lot of coding then I might resort to it as I don't have large >> data sets to index at this time. > Also I think (dont remember off the top of my head) that this note in > Lucene in Action refers to the fact that its base class > (BaseCharFilter) corrected offsets in O(n) at the time. > > We fixed this to be O(log(N)) here as of 3.1: > > > So I think its worth giving it a try before trying to code something
http://mail-archives.apache.org/mod_mbox/lucene-java-user/201211.mbox/%3C5095B7A4.5080603@getrailo.org%3E
CC-MAIN-2019-26
refinedweb
254
72.97
Closed Bug 1433636 Opened 2 years ago Closed 2 years ago Crash in OOM | large | NS _ABORT _OOM | mozilla::safebrowsing::`anonymous namespace'::Read Value<T> Categories (Toolkit :: Safe Browsing, defect, P1, critical) Tracking () mozilla60 People (Reporter: philipp, Assigned: francois) Details (Keywords: crash, regression) Crash Data Attachments (2 files) This bug was filed from the Socorro interface and is report bp-30e684b7-fc2d-404c-9db7-3bad40180126. ============================================================= Top 10 frames of crashing thread: 0 xul.dll NS_ABORT_OOM xpcom/base/nsDebugImpl.cpp:620 1 xul.dll mozilla::safebrowsing::`anonymous namespace'::ReadValue<nsTSubstring<char> > toolkit/components/url-classifier/LookupCacheV4.cpp:472 2 xul.dll mozilla::safebrowsing::LookupCacheV4::LoadMetadata toolkit/components/url-classifier/LookupCacheV4.cpp:544 3 xul.dll mozilla::safebrowsing::LookupCacheV4::LoadFromFile toolkit/components/url-classifier/LookupCacheV4.cpp:169 4 xul.dll mozilla::safebrowsing::LookupCache::LoadPrefixSet toolkit/components/url-classifier/LookupCache.cpp:478 5 xul.dll mozilla::safebrowsing::LookupCache::Open toolkit/components/url-classifier/LookupCache.cpp:83 6 xul.dll mozilla::safebrowsing::Classifier::GetLookupCache toolkit/components/url-classifier/Classifier.cpp:1467 7 xul.dll mozilla::safebrowsing::Classifier::RegenActiveTables toolkit/components/url-classifier/Classifier.cpp:927 8 xul.dll mozilla::safebrowsing::Classifier::Open toolkit/components/url-classifier/Classifier.cpp:262 9 xul.dll nsUrlClassifierDBServiceWorker::OpenDb toolkit/components/url-classifier/nsUrlClassifierDBService.cpp:974 ============================================================= out of memory crashes at startup with this signature show up since firefox 58 on 32-bit and 64-bit builds on windows and they show particularly large memory allocation sizes. over all the crash volume is quite low though... 
Assignee: nobody → francois Status: NEW → ASSIGNED Priority: -- → P1 To test my patch, I followed these steps: 1. Create a new browser profile and start Firefox. 2. Trigger a google4 update using about:url-classifier. 3. Try the phishing and malware test page on. 4. Close Firefox. 5. Go into the cache directory and edit safebrowsing/google4/goog-phish-proto.metadata. 6. Append the string "AAAA" at the start of the file and save. 7. Start Firefox again. 8. Clear the browser cache. 9. Try the phishing and malware test page on. 10. Trigger a google4 update using about:url-classifier. 11. Clear the browser cache. 12. Try the phishing and malware test page on. Results (as expected): - Test pages are correctly blocked at steps 3 and 12. - Test pages are not blocked at step 10. In other words, loading a single failed table from disk will cause the whole google4 database to be blown away. The next update will start from a blank state and work fine. I also confirmed that without my patch, the same steps would crash the browser at startup. Comment on attachment 8947322 [details] Bug 1433636 - Put a limit on the length of Safe Browsing metadata values. If the real limit is 32, why not enforce that instead? If you read 33 now, you know the file is corrupted? Attachment #8947322 - Flags: review?(gpascutto) → review+ (In reply to Gian-Carlo Pascutto [:gcp] from comment #3) > If the real limit is 32, why not enforce that instead? If you read 33 now, > you know the file is corrupted? That's not actually a limit, but rather what the value currently is. It's the opaque "state" string and its size is not specified. I figured I'd use something much larger in case Google decides to use longer strings in the future. I added a new telemetry probe to keep an eye on how often this condition occurs and to detect any unusual increases in this kind of failure. 
Attachment #8947979 - Flags: review?(rrayborn) Comment on attachment 8947979 [details] data review request 1) Is there or will there be **documentation** that describes the schema for the ultimate data set available publicly, complete and accurate? (see [here](), [here](), and [here]() for examples). Refer to the appendix for "documentation" if more detail about documentation standards is needed. Yes, in Telemetry declarations in code 2) Is there a control mechanism that allows the user to turn the data collection on and off? (Note, for data collection not needed for security purposes, Mozilla provides such a control mechanism) Provide details as to the control mechanism available. Yes, Telemetry 3) If the request is for permanent data collection, is there someone who will monitor the data over time?** Yes. This is needed to note when our max length assumption fails. It appears that may happen in the result of regular corruption or if the 3rd party data source changes.**? No 7) Is the data collection covered by the existing Firefox privacy notice? Yes 8) Does there need to be a check-in in the future to determine whether to renew the data? No Attachment #8947979 - Flags: review?(rrayborn) → review+ Pushed by fmarier@mozilla.com: Put a limit on the length of Safe Browsing metadata values. r=gcp Status: ASSIGNED → RESOLVED Closed: 2 years ago Resolution: --- → FIXED Target Milestone: --- → mozilla60 Please request Beta approval on this when you get a chance. Flags: needinfo?(francois) Comment on attachment 8947322 [details] Bug 1433636 - Put a limit on the length of Safe Browsing metadata values. Approval Request Comment [Feature/Bug causing the regression]: Bug 1433636 [User impact if declined]: If the Safe Browsing files get corrupted on disk, the browser can crash at startup. [Is this code covered by automated tests?]: No [Has the fix been verified in Nightly?]: Yes, manually by me. [Needs manual test from QE? 
If yes, steps to reproduce]: Probably not needed but instructions are at [List of other uplifts needed for the feature/fix]: None [Is the change risky?]: No [Why is the change risky/not risky?]: It's only really adding a single early return when we read bogus value from disk. [String changes made/needed]: None Flags: needinfo?(francois) Attachment #8947322 - Flags: approval-mozilla-beta? Comment on attachment 8947322 [details] Bug 1433636 - Put a limit on the length of Safe Browsing metadata values. Not a high volume crash but it's nice that we can prevent it. Let's uplift this for 59 beta 12. Attachment #8947322 - Flags: approval-mozilla-beta? → approval-mozilla-beta+
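The shape of the fix (an early return when a length read from disk is implausible) can be illustrated with a short sketch. This is an illustration of the idea only, not Mozilla's actual C++ code, and the 256-byte limit here is an arbitrary stand-in for the real one:

```python
# Illustration of the fix's idea (not Mozilla's actual code): when reading a
# length-prefixed value from disk, reject implausibly large lengths instead of
# attempting a huge allocation that would abort with an OOM crash.

MAX_METADATA_LEN = 256  # arbitrary stand-in for the real limit


def read_value(buf, offset):
    """Read a 4-byte little-endian length, then that many bytes of payload."""
    length = int.from_bytes(buf[offset:offset + 4], "little")
    if length > MAX_METADATA_LEN:
        # Corrupted file: fail cleanly so the caller can discard the database.
        raise ValueError("metadata value length %d exceeds limit" % length)
    start = offset + 4
    if start + length > len(buf):
        raise ValueError("truncated metadata value")
    return buf[start:start + length], start + length


# A well-formed record reads back cleanly:
record = (5).to_bytes(4, "little") + b"hello"
value, _ = read_value(record, 0)
print(value)  # b'hello'
```

A record whose length field has been corrupted (for example by the "AAAA" prepended in the test steps above) raises an error instead of crashing, which matches the observed behavior of discarding the database and rebuilding it on the next update.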
https://bugzilla.mozilla.org/show_bug.cgi?id=1433636
This tutorial explains everything you need to know about matching groups in Python's re package for regular expressions. You may have also read the term "capture groups", which points to the same concept. As you read through the tutorial, you can also watch the tutorial video where I explain everything in a simple way. So let's start with the basics:

Matching Group ()

What's a matching group? Like you use parentheses to structure mathematical expressions, (2 + 2) * 2 versus 2 + (2 * 2), you use parentheses to structure regular expressions. An example regex that does this is 'a(b|c)'. The whole content enclosed in the opening and closing parentheses is called a matching group (or capture group). You can have multiple matching groups in a single regex. And you can even have hierarchical matching groups, for example 'a(b|(cd))'.

One big advantage of a matching group is that it captures the matched substring. You can retrieve it in other parts of the regular expression—or after analyzing the result of the whole regex matching.

Let's have a short example for the most basic use of a matching group—to structure the regex. Say you create the regex b?(a.)* with the matching group (a.). It matches all patterns starting with zero or one occurrence of the character 'b', followed by an arbitrary number of two-character sequences starting with the character 'a'. Hence, the strings 'bacacaca', 'aaaa', '' (the empty string), and 'Xababababab' all contain a match for your regex.

The use of the parentheses for structuring the regex is intuitive and should come naturally to you because the same rules apply as for arithmetic operations. However, there's a more advanced use of matching groups: retrieval. You can retrieve the matched content of each matching group. So the next question naturally arises:

How to Get the First Matching Group?
There are two scenarios when you want to access the content of your matching groups:

- Access the matching group in the regex pattern to reuse partially matched text from one group somewhere else.
- Access the matching group after the whole match operation to analyze the matched text in your Python code.

In the first case, you simply get the first matching group with the \number special sequence. For example, to get the first matching group, you'd use the \1 special sequence. Here's an example:

>>> import re
>>> re.search(r'(j.n) is \1', 'jon is jon')
<re.Match object; span=(0, 10), match='jon is jon'>

You'll use this feature a lot because it gives you much more expression power: for example, you can search for a name in a text based on a given pattern and then process specifically this name in the rest of the text (and not all other names that would also fit the pattern).

Note that the numbering of the groups starts with \1 and not with \0—a rare exception to the rule that in programming, all numbering starts with 0.

In the second case, you want to know the contents of the first group after the whole match. How do you do that? The answer is also simple: use the m.group(1) method on the matching object m. Here's an example:

>>> import re
>>> m = re.search(r'(j.n)', 'jon is jon')
>>> m.group(1)
'jon'

The numbering works consistently with the previously introduced regex group numbering: start with identifier 1 to access the contents of the first group.

How to Get All Other Matching Groups?

Again, there are two different intentions when asking this question:

- Access the matching group in the regex pattern to reuse partially matched text from one group somewhere else.
- Access the matching group after the whole match operation to analyze the matched text in your Python code.

In the first case, you use the special sequence \2 to access the second matching group, \3 to access the third matching group, and \99 to access the ninety-ninth matching group.
Here’s an example: >>> import re >>> re.search(r'(j..) (j..)\s+\2', 'jon jim jim') <re.Match object; span=(0, 11), >>> re.search(r'(j..) (j..)\s+\2', 'jon jim jon') >>> As you can see, the special sequence \2 refers to the matching contents of the second group 'jim'. In the second case, you can simply increase the identifier too to access the other matching groups in your Python code: >>> import re >>> m = re.search(r'(j..) (j..)\s+\2', 'jon jim jim') >>> m.group(0) 'jon jim jim' >>> m.group(1) 'jon' >>> m.group(2) 'jim' This code also shows an interesting feature: if you use the identifier 0 as an argument to the m.group(0) method, the regex module will give you the contents of the whole match. You can think of it as the first group being the whole match. Named Groups: (?P<name>…) and (?P=name) Accessing the captured group using the notation \number is not always convenient and sometimes not even possible (for example if you have more than 99 groups in your regex). A major disadvantage of regular expressions is that they tend to be hard to read. It’s therefore important to know about the different tweaks to improve readability. One such optimization is a named group. It’s really just that: a matching group that captures part of the match but with one twist: it has a name. Now, you can use this name to access the captured group at a later point in your regular expression pattern. This can improve readability of the regular expression. import re The code searches for substrings that are enclosed in either single or double quotes. You first match the opening quote by using the regex ["\']. You escape the single quote, \' so that the Python regex engine does not assume (wrongly) that the single quote indicates the end of the string. You then use the same group to match the closing quote of the same character (either a single or double quote). Non-Capturing Groups (?:…) In the previous examples, you’ve seen how to match and capture groups with the parentheses (...). 
You’ve learned that each match of this basic group operator is captured so that you can retrieve it later in the regex with the special commands \1, \2, …, \99 or after the match on the matched object m with the method m.group(1), m.group(2), and so on. But what if you don’t need that? What if you just need to keep your regex pattern in order—but you don’t want to capture the contents of a matching group? The simple solution is the non-capturing group operation (?: ... ). You can use it just like the capturing group operation ( ... ). Here’s an example: >>>import re >>> re.search('(?:python|java) is great', 'python is great. java is great.') <re.Match object; span=(0, 15), The non-capturing group exists with the sole purpose to structure the regex. You cannot use its content later: >>> m = re.search('(?:python|java) is great', 'python is great. java is great.') >>> m.group(1) Traceback (most recent call last): File "<pyshell#28>", line 1, in <module> m.group(1) IndexError: no such group >>> If you try to access the contents of the non-capturing group, the regex engine will throw an IndexError: no such group. Of course, there’s a straightforward alternative to non-capturing groups. You can simply use the normal (capturing) group but don’t access its contents. Only rarely will the performance penalty of capturing a group that’s not needed have any meaningful impact on your overall application.. Think of the lookahead assertion as a non-consuming pattern match. The regex engine goes from the left to the right—searching for the pattern. At each point, it has one “current” position to check if this position is the first position of the remaining match. In other words, the regex engine tries to “consume” the next character as a (partial) match of the pattern. The advantage of the lookahead expression is that it doesn’t consume anything. It just “looks ahead” starting from the current position whether what follows would theoretically match the lookahead pattern. 
If it doesn’t, the regex engine cannot move? If both patterns appear anywhere in the string, the whole string should be returned as a match. Now, this is a bit more complicated because any regular expression pattern is ordered from left to right. A simple solution is to use the lookahead assertion (?.*A) to check whether regex A appears anywhere in the string. (Note we assume a single line string as the .* pattern doesn’t match the newline character by default.) Let’s first have a look at the minimal solution to check for two patterns anywhere in the string (say, patterns in the same position at the beginning of the string. So, you can repeat the same for the word you. Note that this method doesn’t care about the order of the two words: >>> import re >>>>> re.findall(pattern, 'hi how are you?') [''] >>> re.findall(pattern, 'you are how? hi!') [''] No matter which word “hi” or “you” appears first in the text, the regex engine finds both. You may ask: why’s the output the empty string? The reason is that the regex engine hasn’t consumed any character. It just checked the lookaheads. So the easy fix is to consume all characters as follows: >>> import re >>>>> re.findall(pattern, 'you fly high') ['you fly high'] Now, the whole string is a match because after checking the lookahead with'. Group Flags (?aiLmsux:…) and (?aiLmsux) You can control the regex engine with the flags argument of the re.findall(), re.search(), or re.match() methods. For example, if you don’t care about capitalization of your matched substring, you can pass the re.IGNORECASE flag to the regex methods: >>> re.findall('PYTHON', 'python is great', flags=re.IGNORECASE) ['python'] But using a global flag for the whole regex is not always optimal. What if you want to ignore the capitalization only for a certain subregex? You can do this with the group flags: a, i, L, m, s, u, and x. 
Each group flag has its own meaning: For example, if you want to switch off the differentiation of capitalization, you’ll use the i flag as follows: >>> re.findall('(?i:PYTHON)', 'python is great') ['python'] You can also switch off the capitalization for the whole regex with the “global group flag” (?i) as follows: >>> re.findall('(?i)PYTHON', 'python is great') ['python'] Where to Go From Here? Summary: You’ve learned about matching groups to structure the regex and capture parts of the matching result. You can then retrieve the captured groups with the \number syntax within the regex pattern itself and with the m.group(i) syntax in the Python code at a later stage. To learn the Python basics, check out my free Python email academy with many advanced courses—including a regex video tutorial in your INBOX. Join 20,000+ ambitious coders for free!
https://blog.finxter.com/python-re-groups/
CC-MAIN-2020-40
refinedweb
1,842
74.08
November 17, 2021 I wanted to create sites in my Netlify account programmatically using Netlify’s API, but that were connected to a GitHub repository (so when I pushed a change to the repository, my site would automatically rebuild as well). Using the Node.js client for the Netlify API, the request looks like this. I’ll explain how to get the parameters in detail afterwards. import NetlifyAPI from "netlify"; const client = new NetlifyAPI(process.env.NETLIFY_TOKEN); const site = await netlify.createSite({ body: { subdomain: "unique-subdomain-for-this-site", repo: { provider: "github", id: 123456789, repo: "username/repository", private: true, branch: "main", installation_id: 123456, }, }, }); This is a pretty standard createSite call to Netlify’s API, but it can be a bit tricky to figure out which parameters you need under repo. provider: since we’re linking with a GitHub repo, this value will be github. idrefers to the ID of the GitHub repository. There are multiple ways to get this: octolytics-dimension-repository_idmeta tag. repois the name of the repository you want to deploy — for example, benborgers/cool-site. privateis a boolean that indicates whether your GitHub repository is public or private. branchindicates which branch of the GitHub repository you want this site to stay up-to-date with (usually masteror main). installation_id: this requires you to have the Netlify GitHub app authorized on the GitHub account that owns the repository. If you go to github.com/settings/installations and don’t see Netlify on that list, deploy a site manually through the Netlify dashboard from this GitHub account first and give Netlify access to your GitHub account through their GitHub integration. Once Netlify is on that list of applications, click “Configure” next to it. The installation_idis the number at the end of the URL. A quick favor: was anything I wrote incorrect or misspelled, or do you still have questions? Please use this form to let me know or ask for help!
https://benborgers.com/posts/netlify-api-github
CC-MAIN-2021-49
refinedweb
322
56.35