This topic provides an overview of generics in the .NET Framework and a summary of generic types and methods. It also defines the terminology used to discuss generics.
Public Class Generic(Of T)
Public Field As T
End Class
public class Generic<T>
{
public T Field;
}
generic<typename T> public ref class Generic
{
public:
T Field;
};
Dim g As New Generic(Of String)
g.Field = "A string"
Generic<string> g = new Generic<string>();
g.Field = "A string";
Generic<String^>^ g = gcnew Generic<String^>();
g->Field = "A string";
The following terms are used to discuss generics in the .NET Framework:
A generic type definition is a class, structure, or interface declaration that functions as a template, with placeholders for the types that it can contain or use. For example, the Dictionary<TKey, TValue> generic class can contain two types: keys and values.
Constraints are limits placed on generic type parameters. For example, you might limit a type parameter to types that implement the IComparer<T> generic interface.
Function Generic(Of T)(ByVal arg As T) As T
Dim temp As T = arg
...
End Function
T Generic<T>(T arg) { T temp = arg; ...}
generic<typename T> T Generic(T arg) { T temp = arg; ...};
pyscroll
For Python 2.7 & 3.3+ and Pygame 1.9
A simple & fast module for animated scrolling maps for your new or existing game.
Introduction
pyscroll is a generic module for making a fast scrolling image with PyGame. It uses a lot of magic to get great framerates out of PyGame. It only exists to draw a map. It doesn't load images or data, so you can use your own custom data structures, tile storage, etc.
pyscroll is compatible with pytmx, so you can use your Tiled maps. It also has out-of-the-box support for PyGame Sprites.
The included class, BufferedRenderer, gives great framerates, supports layered rendering and can draw itself. It supports fast layered tile rendering with alpha channel support. It also includes animated tile rendering and zooming!
Use It Like a Camera
In order to further simplify using scrolling maps, pyscroll includes a pygame Sprite Group that will render all sprites on the map and will correctly draw them over or under tiles. Sprites can use their Rect in world coordinates, and the Group will work like a camera, translating world coordinates to screen coordinates.
Zooming is a new feature and should operate quickly on most computers. Be aware that it is cheap to operate a zoomed view, but expensive to do the actual zooming. This means that it's easy to zoom the map once, but don't expect it to work quickly if you want to do an animated zoom into something.
It's useful for making minimaps or creating simple chunky graphics.
Features
- Fast framerate
- Speed is not affected by map size
- Sprites or plain surfaces can be drawn in layers
- Animated tiles
- Zoom in and out
- Includes optional drop-in replacement for pygame LayeredGroup
- Pixel alpha and colorkey tilesets are supported
- Drawing and scrolling shapes
- Fast and small footprint
Installation
Install from pip
pip install pyscroll
You can also manually install it
python setup.py install
New Game Tutorial
This is a quick guide on building a new game with pyscroll and pygame. It uses the PyscrollGroup for efficient rendering. You are free to use any other pygame techniques and functions.
Open quest.py in the tutorial folder for a gentle introduction to pyscroll and the PyscrollGroup for PyGame. There are plenty of comments to get you started.
The Quest demo shows how you can use a pyscroll group for drawing, how to load maps with PyTMX, and how pyscroll can quickly render layers. Moving under some tiles will cause the Hero to be covered.
The repo wiki has more in-depth explanations of the tutorial code, including one way to implement sprite animation. Be sure to check it out. Anyone is welcome to make additions or improvements.
Example Use with pytmx
pyscroll and pytmx can load your maps from Tiled and use your PyGame Sprites. The following is a very basic way to load a map onto the screen.
import pyscroll
from pytmx.util_pygame import load_pygame

# Load TMX data
tmx_data = load_pygame("desert.tmx")

# Make data source for the map
map_data = pyscroll.TiledMapData(tmx_data)

# Make the scrolling layer
screen_size = (400, 400)
map_layer = pyscroll.BufferedRenderer(map_data, screen_size)

# make the PyGame SpriteGroup with a scrolling map
group = pyscroll.PyscrollGroup(map_layer=map_layer)

# Add sprites to the group
group.add(sprite)

# Center the layer and sprites on a sprite
group.center(sprite.rect.center)

# Draw the layer
# If the map covers the entire screen, do not clear the screen:
# Clearing the screen is not needed since the map will clear it when drawn
# This map covers the screen, so no clearing!
group.draw(screen)

# adjust the zoom (out)
map_layer.zoom = .5

# adjust the zoom (in)
map_layer.zoom = 2.0
Adapting Existing Games / Map Data
pyscroll can be used with existing map data, but you will have to create a class to interact with pyscroll or adapt your data handler. Try to make it follow the same API as the TiledMapData adapter and you should be fine.
There is a good possibility that tile animations will not work for custom map types (only tested with pytmx). I will investigate this in the future.
The following does not require pytmx, you can use your own data format.
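To sketch what such an adapter might look like, here is a minimal pure-Python data source. The attribute and method names (`map_size`, `visible_tile_layers`, `get_tile_image`) are assumptions modeled on the TiledMapData adapter; check that adapter's source for the exact contract your pyscroll version expects.

```python
# A minimal sketch of a custom data source for pyscroll: the renderer
# mainly needs the tile/map dimensions and a way to fetch the image for
# a given tile coordinate. Names here are illustrative, not pyscroll's
# guaranteed API.

class CustomMapData(object):
    def __init__(self, grid, tile_images, tile_size=(32, 32)):
        self.grid = grid                  # 2D list of tile ids (one layer)
        self.tile_images = tile_images    # tile id -> surface
        self.tilewidth, self.tileheight = tile_size

    @property
    def map_size(self):
        # (width, height) of the map, in tiles
        return len(self.grid[0]), len(self.grid)

    @property
    def visible_tile_layers(self):
        # this toy source has a single layer
        return [0]

    def get_tile_image(self, x, y, layer):
        # return the surface for this position, or None for an empty cell
        return self.tile_images.get(self.grid[y][x])
```

An instance of this class would then be passed to BufferedRenderer in place of a TiledMapData object.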
Give pyscroll surfaces to layer into the map
pyscroll can use a list of surfaces and render them on the map, taking into account their layer position.
map_layer = pyscroll.BufferedRenderer(map_data, map_size)

# just an example for clarity. here's a made up game engine:
def game_engine_draw():
    surfaces = list()
    for game_object in my_game_engine:
        # pyscroll uses normal pygame surfaces.
        surface = game_object.get_surface()

        # pyscroll will draw surfaces in screen coordinates, so translate them
        # you need to use a rect to handle tiles that cover surfaces.
        rect = game_object.get_screen_rect()

        # the list called 'surfaces' is required for pyscroll
        # notice the layer. this determines which layers the sprite will cover.
        # layer numbers higher than this will cover the surface
        surfaces.append((surface, rect, game_object.layer))

    # tell pyscroll to draw to the screen, and use the surfaces supplied
    map_layer.draw(screen, screen.get_rect(), surfaces)
FAQ
Why are tiles repeating while scrolling?
Pyscroll by default will not handle maps that are not completely filled with tiles. This is in consideration of drawing speed. To clarify, you can have several layers, some layers without tiles, and that is fine; the problem is when there are empty spaces in all the layers, leaving gaps in the entire map. There are two ways to fix this issue, with the first solution being the best performance-wise.
1. In Tiled (or your data), fill in the empty spots with a tile
For best performance, you must have a tile in each part of the map. You can create a simple background layer, and fill with single color tiles where there are gaps. Pyscroll is very fast even with several layers, so there is virtually no penalty.
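If your map data lives in your own structures, the same fix can be automated. The sketch below assumes layers are 2D lists of tile ids with None for empty cells (an illustrative representation, not pyscroll's): it computes a background layer that covers exactly the cells left empty by every layer.

```python
# Sketch: scan the stacked layers and, wherever no layer has a tile,
# write a filler tile id into a background layer so every map cell is
# covered. The layer representation is an assumption for illustration.

def fill_gaps(layers, filler_id=0):
    """Return a background layer covering cells empty in every layer."""
    height = len(layers[0])
    width = len(layers[0][0])
    background = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if all(layer[y][x] is None for layer in layers):
                background[y][x] = filler_id
    return background
```

The returned layer would be inserted below the existing layers in your map data.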
2. Pass "alpha=True" to the BufferedRenderer constructor.
All internal buffers will now support 'per-pixel alpha' and the areas without tiles will be fully transparent. You may still have graphical oddities depending on if you clear the screen or not, so you may have to experiment here. Since per-pixel alpha buffers are used, overall performance will be reduced.
Why are there obvious/ugly 'streaks' when scrolling?
Streaks are caused by missing tiles. See the above answer for solutions.
Can I blit anything 'under' the scrolling map layer?
Yes! There are two ways to handle this situation; both are experimental, but should work. These options will cause the renderer to do more housekeeping, actively clearing empty spaces in the buffer, so overall performance will be reduced.
1. Pass "alpha=True" to the constructor.
When drawing the screen, first blit what you want to be under the map (like a background, or parallax layer), then draw the pyscroll renderer or group. Since per-pixel alpha buffers are used, overall performance will be reduced.
2. Set a colorkey.
Pass "colorkey=theColorYouWant" to the BufferedRenderer constructor. In theory, you can now blit the map layer over other surfaces with transparency, but beware that it will produce some nasty side effects:
- Overall, performance will be reduced, as empty areas are being filled with the colorkey color.
- If mixing 'per-pixel alpha' tilesets, the edges of your tiles may be discolored and look wrong.
Does the map layer support transparency?
Yes...and no. By default, pyscroll handles all transparency types very well for the tiles and you should not have issues with that. However, if you are trying to blit/draw the map over existing graphics and "see through" transparent areas, then you will have to use the "alpha", or "colorkey" methods described above.
Does pyscroll support parallax layers?
Yes/no. Because there is no direct support in the primary editor, Tiled, I have not implemented an API for it. However, you can build your own parallax effects by passing "alpha=True" to the BufferedRenderer constructor. Then it is just a matter of scrolling at different speeds. Be warned that rendering alpha layers is much slower.
MLflow is one of the latest open source projects added to the Apache Spark ecosystem by databricks. Its first debut was at the Spark + AI Summit 2018. The source code is hosted in the mlflow GitHub repo and is still in the alpha release stage. The current version is 0.4.1 and was released on 08/03/2018.
Blogs and meetups from databricks describe MLflow and its roadmap, including Introducing MLflow: an Open Source Machine Learning Platform and MLflow: Infrastructure for a Complete Machine Learning Life Cycle. Users and developers can find useful information to try out MLflow and further contribute to the project.
However, this blog will dig further into MLflow and describe some specifics based on my first-hand experience and the study of the source code. I also provide suggestions on areas where I think MLflow can be improved.
What is MLflow
MLflow is described as an open source platform for the complete machine learning lifecycle. A complete machine learning lifecycle includes raw data ingestion, data analysis and preparation, model training, model evaluation, model deployment, and model maintenance. MLflow is built as a Python package and provides open REST APIs and commands to:
- Log important parameters, metrics, and other data that is important to the machine learning model
- Track the environment a model is run on
- Run any machine learning codes on that environment
- Deploy and export models to various platforms with multiple packaging formats
MLflow is implemented as several modules, where each module supports a specific function.
MLflow components
Currently, MLflow has three components, as shown (source: Introducing MLflow: an Open Source Machine Learning Platform).
More information on each component can be found in the previous link as well as the link to the MLflow Documentation. The rest of this section gives a high-level overview of the features and implementation of each component.
Tracking
The Tracking component implements REST APIs and the UI for parameters, metrics, artifacts, and source logging and viewing. The back end is implemented with Flask and runs on the gunicorn HTTP server, while the UI is implemented with React.
The Python module for tracking is mlflow.tracking.
Each time users train a model on the machine learning platform, MLflow creates a Run and saves the RunInfo meta information onto a disk. Python APIs log parameters and metrics for a Run. The output of the run, such as the model, is saved in the artifacts for a Run. Each individual Run is grouped into an Experiment. The following class diagram shows classes that are defined in MLflow to support tracking functions.
The model training source code needs to call MLflow APIs to log the data to be tracked. For example, calling log_metric to log the metrics and log_param to log the parameters.
The MLflow tracking server currently uses a file system to persist all Experiment data. The directory structure looks like:
mlruns
└── 0
    ├── 7003d550294e4755a65569dd846a7ca6
    │   ├── artifacts
    │   │   └── test.txt
    │   ├── meta.yaml
    │   ├── metrics
    │   │   └── foo
    │   └── params
    │       └── param1
    └── meta.yaml
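To make the layout concrete, the following sketch writes a fake run in this shape to a temporary directory and reads it back with only the standard library. It treats each param and metric as a one-value file, which is a simplification; the real file store may encode extra details such as timestamps.

```python
# Sketch of the file-backed tracking store: each param/metric is simply
# a small file named after its key, whose content is the value.
import tempfile
from pathlib import Path

run_dir = Path(tempfile.mkdtemp()) / "mlruns" / "0" / "7003d550294e4755a65569dd846a7ca6"
(run_dir / "params").mkdir(parents=True)
(run_dir / "metrics").mkdir()
(run_dir / "params" / "param1").write_text("5")
(run_dir / "metrics" / "foo").write_text("0.91")

def read_run(path):
    # walk the params/ and metrics/ folders and rebuild the run's data
    params = {p.name: p.read_text() for p in (path / "params").iterdir()}
    metrics = {m.name: float(m.read_text()) for m in (path / "metrics").iterdir()}
    return params, metrics

params, metrics = read_run(run_dir)
```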
Every Run can be viewed through the UI browser that connects to the tracking server.
Users can search and filter models with metrics and params, and compare and retrieve model details.
Projects
The Projects component defines the specification on how to run the model training code. It includes the platform configuration, the dependencies, the source code, and the data that allow the model training to be executed through MLflow. The following code is an example provided by MLflow.
name: tutorial
conda_env: conda.yaml
entry_points:
  main:
    parameters:
      alpha: float
      l1_ratio: {type: float, default: 0.1}
    command: "python train.py {alpha} {l1_ratio}"
The mlflow run command looks for the MLproject file for the spec and downloads the dependencies, if needed. It then runs the model training with the source code and the data specified in the MLproject.
mlflow run mlflow/example/tutorial -P alpha=0.4
The MLproject specifies the command to run the source code. Therefore, the source code can be in any language, including Python. Projects can be run on many machine learning platforms, including TensorFlow, PySpark, scikit-learn, and others. If the dependent Python packages are available to download by Anaconda, they can be added to the conda.yaml file and MLflow sets up the packages automatically.
Models
The Models component defines the general model format in the MLmodel file as follows:
artifact_path: model
flavors:
  python_function:
    data: model.pkl
    loader_module: mlflow.sklearn
  sklearn:
    pickled_model: model.pkl
    sklearn_version: 0.19.1
run_id: 0927ac17b2954dc0b4d944e6834817fd
utc_time_created: '2018-08-06 18:38:16.294557'
It specifies different flavors for different tools to deploy and load the model. This allows the model to be saved in its original binary persistence output from the platform training the model. For example, in scikit-learn, the model is serialized with the Python pickle package. The model can then be deployed to an environment that understands this format. With the sklearn flavor, if the environment has scikit-learn installed, it can directly load the model and serve it. Otherwise, with the python_function flavor, MLflow provides the mlflow.sklearn Python module as the helper to load the model.
So far, MLflow supports model load, save, and deployment with the scikit-learn, TensorFlow, SageMaker, H2O, Azure, and Spark platforms.
With MLflow's modular design, the current Tracking, Projects, and Models components touch most parts of the machine learning lifecycle. You can also choose to use one component but not the others. With its REST APIs, these components can also be easily integrated into other machine learning workflows.
Experiencing MLflow
Installing MLflow is quick and easy if Anaconda has been installed and a virtual environment has been created.
pip install mlflow installs the latest MLflow release.
To train the model with TensorFlow, run pip install tensorflow to install the latest version of TensorFlow.
A simple example that trains a TensorFlow model with the following code, tf-example.py, is:
import tensorflow as tf
from tensorflow import keras
import numpy as np
import mlflow
from mlflow import tracking

# load dataset
dataset = np.loadtxt("/Users/wzhuang/housing.csv", delimiter=",")

# save the data as artifact
mlflow.log_artifact("/Users/wzhuang/housing.csv")

# split the features and label
X = dataset[:, 0:15]
Y = dataset[:, 15]

# define the model
first_layer_dense = 64
second_layer_dense = 64
model = keras.Sequential([
    keras.layers.Dense(first_layer_dense, activation=tf.nn.relu,
                       input_shape=(X.shape[1],)),
    keras.layers.Dense(second_layer_dense, activation=tf.nn.relu),
    keras.layers.Dense(1)
])

# log some parameters
mlflow.log_param("First_layer_dense", first_layer_dense)
mlflow.log_param("Second_layer_dense", second_layer_dense)

optimizer = tf.train.RMSPropOptimizer(0.001)
model.compile(loss='mse', optimizer=optimizer, metrics=['mae'])

# train
model.fit(X, Y, epochs=500, validation_split=0.2, verbose=0)

# log the model artifact
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)
mlflow.log_artifact("model.json")
The first call to the tracking API starts the tracking server and logs all of the data sent through the current and subsequent APIs. This logged data can then be viewed in the MLflow UI. From the previous example, it’s easy to just call the logging APIs in any place you want to track.
Packaging this project is also very simple by creating an MLproject file such as:
name: tf-example
conda_env: conda.yaml
entry_points:
  main:
    command: "python tf-example.py"
with conda.yaml
name: tf-example
channels:
  - defaults
dependencies:
  - python=3.6
  - numpy=1.14.3
  - pip:
    - mlflow
    - tensorflow
Then mlflow run tf-example runs the project on any environment. It first creates a conda environment with the required Python packages installed and then runs tf-example.py inside that virtual environment. As expected, the run result is also logged to the MLflow tracking server.
MLflow also comes with a server implementation where sklearn and other types of models can be deployed and served. The MLflow GitHub README.md illustrates its usage. However, to deploy and serve the model built by the previous example requires new code that understands Keras models. This is beyond this blog's scope.
To summarize, the experience with MLflow is smooth. There were several bugs here and there, but overall I was satisfied with what the project claims to be. Of course, because MLflow is still in its alpha phase, bugs and the lack of some features are to be expected.
Areas where MLflow can be enhanced
MLflow provides an open source solution to track the data science process, and to package and deploy machine learning models. As it claims, it targets the management of the machine learning lifecycle. The current alpha version releases the Tracking, Projects, and Models components that tackle individual stages of the machine learning workflow. The tool is compact, written in Python, and provides APIs and a UI that integrate easily with any machine learning platform.
However, there are still many places that MLflow can be improved. There are also new features that are required for the tool to fully manage and monitor all aspects of the lifecycle of machine learning.
At the databricks’ meetup on 07/19/2018, several items were mentioned in the longer-term road map of MLflow according to the presentation. There are four categories: improving current components, a new MLflow Data component, hyperparameter tuning, and language and library integrations. Some items are really important so they need more explanation.
Implementing a database back end for the Tracking component is included in the first category. As previously mentioned, the MLflow tracking server logs information for every run in the local file system. This looks like a quick and easy implementation. A better solution would be using a database as the tracking store. When the number of machine learning runs grows, databases have obvious advantages with data queries and retrieval.
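As a rough illustration of why a database back end helps, the following sketch stores run metadata in an in-memory SQLite database; a query such as "all runs with mae below a threshold" becomes a single SQL statement instead of a walk over run directories. The schema and column names are made up for illustration.

```python
# Toy sketch of a database-backed tracking store using only the standard
# library. Runs and metrics live in tables, so filtering is one query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (run_id TEXT PRIMARY KEY, experiment_id INTEGER)")
conn.execute("CREATE TABLE metrics (run_id TEXT, key TEXT, value REAL)")

# record two runs with an 'mae' metric each
conn.execute("INSERT INTO runs VALUES ('abc123', 0)")
conn.execute("INSERT INTO metrics VALUES ('abc123', 'mae', 2.4)")
conn.execute("INSERT INTO runs VALUES ('def456', 0)")
conn.execute("INSERT INTO metrics VALUES ('def456', 'mae', 5.1)")

# "all runs with mae below 3.0" is a single SQL statement
rows = conn.execute(
    "SELECT r.run_id FROM runs r JOIN metrics m ON r.run_id = m.run_id "
    "WHERE m.key = 'mae' AND m.value < 3.0"
).fetchall()
```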
Model metadata support is also included in the first category. This is extremely important. The current Tracking component does not describe the model, and all runs are viewed as a flat list ordered by date. The tool allows searching based on the parameters and metrics, but it's not enough. I would like to quickly retrieve the models by model name, algorithm, platform, and so on. This requires metadata input when a model training is tracked. The Tracking server logs the file name of the source code, but this does not provide any value in identifying a model. Instead, it should allow the input of a description of the model. Furthermore, access control is also essential and can be part of the metadata. And model management should also have versioning support.
In the second category, MLflow will introduce a new Data component. It will build on top of Spark's Data Source API and allow projects to load data from many formats. This can be viewed as an effort to tighten MLflow's relationship with Spark. What should be done further is, of course, maintaining the metadata for the data.
In the fourth category, the integration with R and Java is also important. Although Python is one of the most adopted languages in machine learning, there are still many data scientists using R and other languages. MLflow needs to provide R and Java APIs so those machine learning workflows can be managed as well.
There are other important features not included in the current roadmap. From my viewpoint, the following items are also needed and can help complete MLflow as a full machine learning data and model management tool.
Register APIs
MLflow provides the APIs to log run information. These APIs must be called inside of the model training source code, and they are called at runtime. This approach can become inconvenient: you may want to track previous runs that did not call these APIs, or runs for which you have no access to the source code. To solve this problem, a set of REST APIs that can be called after the run to register the run information would be very helpful. The run information, such as parameters, metrics, and artifacts, can be part of the JSON input.
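A hypothetical register call might bundle a finished run's parameters, metrics, and artifact paths into one JSON document. The endpoint and payload shape below are assumptions; no such API existed in MLflow 0.4.1.

```python
# Sketch of the proposed after-the-fact "register" request. The payload
# shape and endpoint are hypothetical, for illustration only.
import json

def build_register_payload(experiment, params, metrics, artifacts):
    """Bundle a completed run's data into one JSON document."""
    return json.dumps({
        "experiment": experiment,
        "params": params,
        "metrics": metrics,
        "artifacts": artifacts,
    })

payload = build_register_payload(
    "housing",
    {"first_layer_dense": 64},
    {"mae": 2.4},
    ["model.json"],
)
# e.g. requests.post("http://tracking-server/api/register", data=payload)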
UI view enhancement
In the Experiments UI view, the Parameters and Metrics columns display all parameters and metrics for all runs. The row becomes unfriendly, long, and difficult to view when more types of parameters and metrics are tracked. Instead, for each run, the view should display a hyperlink to the detailed run info where the parameters and metrics are shown only for that run.
Artifact location
MLflow can take artifacts from either local storage or GitHub. It would be a great improvement to support loading and saving data, source code, and models from other sources like S3 Object Storage, HDFS, Nexus, and so on.
Import and export
After the tracking store is implemented with a database as the back end, the next thing will be to support the import and export of all experiments stored in different databases.
Run projects remotely
The Projects component specifies the command to run the project, and the command is displayed in the tracking UI. But because the project can run only on the specific machine learning platform, which can be different from the tracking server, you still must connect to the platform remotely and issue the command line. The MLproject specification should include the platform information, such as the hostname and credentials. With this information, the tracking UI should add an action to kick off the run through the UI.
Tuning
Adding the parameter tuning functionality through the tracking UI is an important feature. You will be allowed to change the parameters and kick off the run if the project is tracked by the Projects component.
Common model format
The
Modelscomponent defines
flavorsfor a model. However, every model is still stored in its original format only understood by that training tool. There is a gap between the model development and production. Portable Format for Analytics is a specification that can help bridge the gap.
MLmodecan be improved to understand PFA or convert models into PFA for easy deploying models to PFA-enabled platforms.
Pipeline integration
A complete machine learning lifecycle also includes data preparation and other pipelines. MLflow so far only tracks the training step. The MLproject can be enhanced to include the specifications of other pipelines. Some pipelines can be shared by projects as well.
Summary
In this blog, I’ve described MLflow and provided some specifics based on my experience in using the project. I’ve explained some features of MLflow and also provided suggestions on areas where I think MLflow can be improved.
(For more resources on Plone, see here.)
We will first inspect a few structural changes and install them, and then finally examine the various components and skin layer items that have been changed, one at a time. Where restarting Zope or rerunning your buildout would be required, this will be noted.
About the theme
This theme and its design are available for personal and professional use to anyone, and can be freely modified. You can (and should) download the files from using the following command:
svn co
Note the space between the words trunk and plonetheme.guria. This theme is intended for installation on Plone 3 web sites. The finished theme should look like the following, but we have work to do to make this happen:
This theme was created by me, for use by a charity group in India, called Guria, dedicated to ending human trafficking and prostitution. The finished site is currently in development, and is generously hosted free of charge by the talented folks at Six Feet Up (sixfeetup.com). Additionally, most of the code and lessons learned come courtesy of similar themes created by the staff at ONE/Northwest in Seattle, Washington.
The design for this theme was created with the assumption that most of the tasks would need to be present in this theme. In fact, the only task not covered here is the creation of a new viewlet manager. Creation of viewlet managers is discussed at and.
Creating a theme product
I created a theme product named plonetheme.guria, using the command line syntax paster create –t plone3_theme, while we were located in the src/ directory of our buildout, as seen next:
[bash: /opt/mybuildout/src] paster create -t plone3_theme
plonetheme.guria
Selected and implied templates:
ZopeSkel#basic_namespace A project with a namespace package
ZopeSkel#plone A Plone project
ZopeSkel#plone3_theme A Theme for Plone 3.0
Variables:
egg: plonetheme.guria
package: plonethemeguria
project: plonetheme.guria
Enter namespace_package (Namespace package (like plonetheme))
['plonetheme']:
Enter package (The package contained namespace package (like
example)) ['example']: guria
Enter skinname (The skin selection to be added to 'portal_skins'
(like 'My Theme')) ['']: Guria Theme for the Plone Theming Book
Enter version (Version) ['0.1']:
Enter description (One-line description of the package) ['An
installable theme for Plone 3.0']:
Enter long_description (Multi-line description (in reST)) ['']:
Enter author (Author name) ['Plone Collective']: Veda Williams
Enter author_email (Author email) ['product-developers@lists.
plone.org']:
Enter keywords (Space-separated keywords/tags) ['web zope plone
theme']:
Enter url (URL of homepage) ['']:
Enter license_name (License name) ['GPL']:
Enter zip_safe (True/False: if the package can be distributed as a
.zip file) [False]:
Creating template basic_namespace
Creating directory ./plonetheme.guria
[snip]
You may wish to generate a new Plone theme product yourself, so that you can compare and contrast the differences between the Guria theme and a vanilla Plone theme.
Notice that the full name of the theme is plonetheme.guria, and where an item shows as blank, it defaults to the example value in that step. In other words, the namespace package defaults to plonetheme, because there was no reason to change it. The skinname is set to a single lowercase word out of stylistic preference. It's important to also note that you should not use hyphens or spaces in your theme names, as they will not be recognized by your buildout.
We've chosen not to override Plone's default stylesheets, and instead, we want to build on top of Plone's default (and excellent!) stylesheets. I prefer this method mostly because the layout needed for Plone's Contents view and other complex structural pieces are already taken care of by Plone's base stylesheets. It's easier than trying to rebuild those from scratch every time, but this is merely a personal preference.
Following the creation of the theme, we register the theme product in our buildout.cfg, using the following syntax
[buildout]
...
develop =
src/plonetheme.guria
...
[instance]
eggs =
plonetheme.guria
...
zcml =
plonetheme.guria
...
If we were using the eggtractor egg, there would be no need to add these lines of code to our buildout.cfg; all we would need to do is rebuild our buildout and it would automatically recognize the new egg. eggtractor can be found at, and is documented thoroughly.
Assuming we are not using eggtractor, we must rebuild our buildout, as we have altered ZCML code and added a new egg:
[bash: /opt/mybuildout/src/] ./bin/buildout
This would be a good time to check your vanilla theme product into Subversion, so that you can track back to the original version, if needed. However, since this is an existing theme, there is no need to do so.
For the purposes of following along, it might be best if you do not yet install the theme. We want to make some changes first. However, we will point out some caveats along the way, in case you installed the theme prematurely.
Altering the theme product's structure
Several modifications have been made to the theme product's structure to shorten folder names and change the default behavior. Again, this is mostly a personal preference. Let's take a look at these changes and how they were achieved.
Renaming the theme
In our theme product, you will see a file named profiles.zcml, located at mybuildout/src/plonetheme.guria/plonetheme/guria/profiles.zcml. The code looks like this:
<configure
xmlns=""
xmlns:
<genericsetup:registerProfile
name="default"
title="Guria Theme for the Plone Theming Book"
directory="profiles/default"
description='Extension profile for the "Guria Theme for the
Plone Theming Book" Plone theme.'
provides="Products.GenericSetup.interfaces.EXTENSION"
/>
</configure>
If you named your theme in a way that was less descriptive, you could alter the title. Naming your theme product properly is important, because you may have different types of products used for a given web site—for example, a policy product for content that might be used in tandem with your theme product. This text is what you see in the portal_quickinstaller at, where mysite is the name of your Plone site. You can also see this name if you install your theme product via Site Setup Add-on Products|, found at.
If you change your XML here, and your theme product is already installed, you'll need to start (or restart) your Zope instance, using:
[bash: /opt/mybuildout] ./bin/instance fg
Shortening folder names
Next, we look at the folder structure of our theme product. The standard Plone 3 theme produces folders with names like plonetheme_guria_custom_images, plonetheme_guria_custom_templates, and plonetheme_guria_styles. While there is nothing wrong with keeping this structure, it can be cumbersome to type or tab through (especially when checking items into Subversion). However, you might want to keep the existing folder names to help you distinguish which items of base Plone you modified. This can make migrations easier. If you choose this route, you probably want to create additional folders for non-base-Plone items. I personally prefer the shorter folder names and don't worry too much about the migration issues.
In the case of this theme product, I opted to make the folder names shorter. First, I altered the names of the folders in the skins/ folder to guria_images, guria_styles, and guria_templates.
Then, in the theme, go to mybuildout/plonetheme.guria/plonetheme/guria/skins.zcml. The code in this file is altered to appear as follows:
<configure
xmlns=""
xmlns:
<!-- File System Directory Views registration -->
<cmf:registerDirectory
<cmf:registerDirectory
<cmf:registerDirectory
</configure>
One more step is required here. In plonetheme.guria/plonetheme/guria/profiles/default/skins.xml, the code is changed to read as follows:
<?xml version="1.0"?>
<object name="portal_skins" allow_any="False"
cookie_persistence="False"
default_skin=" Guria Theme for the Plone Theming Book ">
<object name="guria_images"
meta_type="Filesystem Directory View"
directory="plonetheme.guria:skins/guria_images"/>
<object name="guria_templates"
meta_type="Filesystem Directory View"
directory="plonetheme.guria:skins/guria_templates"/>
<object name="guria_styles"
meta_type="Filesystem Directory View"
directory="plonetheme.guria:skins/guria_styles"/>
<skin-path
<layer name="guria_images"
insert-
<layer name="guria_templates"
insert-
<layer name="guria_styles"
insert-
</skin-path>
</object>
Basically, the steps are the following:
- Rename the folders on the filesystem.
- Modify the skins.zcml file to change the name of the filesystem directory view (what you see in the portal_skins/properties area of the ZMI).
- Modify the skins.xml file in the profiles/default folder to match. This alters the basic profile of your theme product.
If you wanted to add additional folders and filesystem directory views here (a scripts/ folder, for example), you'd just add code by following the conventions given to you in these files and then create additional folders.
Making changes to the ZCML file means that you would need to do a restart of your Zope instance.
If you installed your theme product before making the changes to the skin layer names, you might want to inspect the skin layers at portal_skins/manage_propertiesForm, to make sure that the correct skin layers are listed. You might even need to reimport the "skins tool" step via portal_setup at. Make sure you choose the correct profile first by choosing your theme product's name from the drop-down list at the top of the import page. The theme product's name is the same name as you find in your profiles.zcml file.
Adjusting how stylesheets and images are used
Next, we remove some of the default behavior given to us by the plone3_theme recipe. In a vanilla theme product, folders named images/ and stylesheets/ are inserted into the plonetheme.guria/plonetheme/guria/browser/ directory. Additionally, a file named main.css is included in the stylesheets/ directory.
I chose not to place the theme's images or stylesheets in the browser/ directory, as this is generally unnecessary for most themes. Advanced programmers may wish to expose these items to the browser layer, but this is generally a personal choice and carries with it additional consequences.
I deleted the folders mentioned above, as well as the i file. Then, I opened the file named configure.zcml, located at plonetheme.guria/plonetheme/guria/browser/, and removed all of the following boilerplate text:
<!-- Viewlets registration -->
<!-- Zope 3 browser resources -->
<!-- Resource directory for images -->
<browser:resourceDirectory
<!-- Resource directory for stylesheets -->
<browser:resourceDirectory
I then removed the highlighted code below fromI then removed the highlighted code below from plonetheme.guria/plonetheme/guria/profiles/default/cssregistry.xml::
<stylesheet title=""
id="++resource++plonetheme.guria.stylesheets/main.css"
media="screen" rel="stylesheet" rendering="import"
cacheable="True" compression="safe" cookable="True"
enabled="1" expression=""/>
And replaced it with the following:
<stylesheet title=""
id="guria.css"
media="screen" rel="stylesheet" rendering="import"
cacheable="True" compression="safe" cookable="True"
enabled="1" expression=""/>
This, in effect, tells our theme product that we will be using a stylesheet named guria.css (or more correctly, guria.css.dtml, as we'll see in a moment). This stylesheet does not yet exist, so we have to create it.
I wanted the option of making use of the DTML behavior provided by Plone, so that I could use certain base properties provided to us via the base_properties.props file (also located in our skins/guria_styles/ folder). DTML essentially allows us to use property-sheet variables and apply changes on a more global scale. The easiest way to create this new stylesheet is to go to your mybuildout/buildout-cache/eggs/Plone[some version number]/Products/CMFPlone/skins/plone_styles/ploneCustom.css and copy the contents of that file into a new stylesheet (named guria.css.dtml) in your theme's guria_styles/ folder (located in the skins/ directory at mybuildout/plonetheme.guria/plonetheme/guria/skins/guria_styles). The important bits of code you want are as follows:
/* <dtml-with base_properties> (do not remove this :) */
/* <dtml-call "REQUEST.set('portal_url', portal_url())"> (not this
either :) */
/* DELETE THIS LINE AND PUT YOUR CUSTOM STUFF HERE */
/* </dtml-with> */
Again, we would need to restart our Zope at this point, as we have modified our ZCML.
If we had already installed our theme product, we'd also have to import our cssregistry.xml file via portal_setup in the ZMI, to capture the new GenericSetup profile settings. However, we have not yet installed the product, so we do not need to worry about this.
For more resources on Plone here.)
Installing the theme product
Now that we've looked a few of the changes we've made to distinguish our theme product from a default Plone theme, let's go ahead and install it. Some of you may already have installed your theme product, and that's okay.
Go to your Zope instance (for example,), and choose Plone Site from the drop-down list on the top right. Or, you can go to this URL:.
You will then see the following screen. For the purposes of this article, we are calling our Plone site mysite. Make sure you add a description for your Plone site, as we'll need that later.
There are three ways to install your theme product:
- You could optionally choose the theme product from the Extension Profiles list (as seen in the previous screenshot)
We could proceed to the portal_quickinstaller tool, located in the ZMI at
We could go to Site Setup Add-on Products| at
Let's select the extension profile named Guria Theme for the Plone Theming Book and click the Install button.
At this point, we should also put our site's portal_css in debug mode, so that we can see any CSS changes instantly. You should not leave a production site in debug mode, as it can negatively impact performance. You can reach the portal_css area at. Simply select the Debug/development mode checkbox and press Save. The Save button may appear at the bottom of the page:
Now, if you visit your site's home page, you should see the installed product as seen next, but we need to do a few things to make it look fully formed.
Adjusting web site content to support the design
As we can see, the installed theme does not exactly match the look of the first screenshot of this article. Many CSS styles are in place, but the searchbox is in the wrong location, and there is a calendar portlet present. You may also notice that breadcrumbs are not present, as they are suppressed using CSS styles. Additionally, the center page content is not yet populated.
To make the site look more realistic, we need to adjust our viewlets, as well as add and suppress some content and portlets on the web site to support the design.
First, let's adjust the viewlets on the site by going to. We have to do this because the Guria theme does not use ordering to organize the viewlets. At the time that the theme was created, ordering was not functional, but should be now. Using the up arrow, next to the searchbox with the orange "Go" button, you can move the viewlet directly below the viewlet called ViewletManager: plone.portaltop (plone.app.layout.viewlets.interfaces.IPortalTop):
This should move the searchbox into the proper location.
Next, we want to add five new folders. To do so, use the Add menu located on the home page, and choose the Folder option. You should create some sub-navigation items (pages or folders are easiest) for at least one of these sections to see the styling of sub-navigation items. Make sure you publish each of these items.
Once we have added a few folders, we need to adjust the settings of the navigation portlet. Click on the Manage portlets link on the bottom-right of the screen while on the home page, or go to.
The navigation portlet has been added to the site by default. Click on the Navigation portlet link. You will see the following screen:
Give the portlet a name and set the start level to 0. This will allow the navigation portlet to show on the home page. Choose Save, and then on the main @@manage-portlets screen, we want to remove the right-hand portlets. As you can see, these include review List, News, Events, and calendar:
Click on the X next to each item to remove each portlet, then click on the Plone logo to return to the home page. You should see the left-hand navigation portlet and no portlets on the right-hand side of the page. This gives us what we need to finish building out our design.
The Guria theme was intended to use the homepage_view page template, as seen in the theme product's skins/guria_templates folder. This view requires the creation of a folder in the root of the site, called homepage, plus several pages (or collections) named slot1, slot2, and slot3. You can optionally create these too if you want to use this particular view.
The alternate view, homepage2_view, also located in the skins/guria_templates folder, requires the creation of a folder named homesection (slightly different from the homepage_view example, just to show the difference between the two views), plus the creation of several pages (not collections) named r1c1 and r1c2. Make sure these names are the shortnames, not just the title of the pages.
Summary
In this article, we have learned how to:
- Create a custom theme product
- Modify the file structure
- Set up a Plone theme to use mostly skin layers for images and stylesheets
- Install the theme product
- Customize the content of your site to support the design, Customizing, and Assigning Portlets Automatically for Plone 3.3 [article]
- Creating a Custom Content] | https://www.packtpub.com/books/content/creating-installing-and-tweaking-your-theme-using-plone-3 | CC-MAIN-2015-22 | refinedweb | 2,914 | 62.58 |
Use Case - Positioners and Layouts In QML
There are several ways to position items in QML.
Below is a brief overview. For more details, see Important Concepts In Qt Quick - Positioning.
Manual Positioning
Items can be placed at specific x,y coordinates on the screen by setting their x,y properties. This will setup their position relative to the top left corner of their parent, according to the visual coordinate system rules.
Combined with using bindings instead of constant values for these properties, relative positioning is also easily accomplished by setting the x and y coordinates to the appropriate bindings.
import QtQuick 2.3 Item { width: 100; height: 100 Rectangle { // Manually positioned at 20,20 x: 20 y: 20 width: 80 height: 80 color: "red" } }
Anchors
The
Item type provides the abilitiy to anchor to other Item types. There are seven anchor lines for each item: left, right, vertical center, top, bottom, baseline and horizontal center. The three vertical anchor lines can be anchored to any of the three vertical anchor lines of another item, and the four horizontal anchor lines can be anchored to the horizontal anchor lines of another item.
For full details, see Positioning with Anchors and the documentation of the anchors property.
import QtQuick 2.3 Item { width: 200; height: 200 Rectangle { // Anchored to 20px off the top right corner of the parent anchors.right: parent.right anchors.top: parent.top anchors.margins: 20 // Sets all margins at once width: 80 height: 80 color: "orange" } Rectangle { // Anchored to 20px off the top center corner of the parent. // Notice the different group property syntax for 'anchors' compared to // the previous Rectangle. Both are valid. anchors { horizontalCenter: parent.horizontalCenter; top: parent.top; topMargin: 20 } width: 80 height: 80 color: "green" } }
Positioners
For the common case of wanting to position a set of types in a regular pattern, Qt Quick provides some positioner types. Items placed in a positioner are automatically positioned in some way; for example, a Row positions items to be horizontally adjacent (forming a row).
For full details see Item Positioners and the documentation for the positioner types.
import QtQuick 2.3 Item { width: 300; height: 100 Row { // The "Row" type lays out its child items in a horizontal line spacing: 20 // Places 20px of space between items Rectangle { width: 80; height: 80; color: "red" } Rectangle { width: 80; height: 80; color: "green" } Rectangle { width: 80; height: 80; color: "blue" } } }
Layout Types
Layout types function in a similar way as positioners but allow further refinement or restrictions to the layout. Specifically, the layout types allow you to:
- set the alignment of text and other items
- resize and fill the allotted application areas automatically
- set size constraints such as minimum or maximum dimensions
- set the spacing between items within the layout } } }
The snippet above comes from the Basic Layouts example. The snippet shows the simplicity of adding various fields and items in a layout. The GridLayout can be resized and its format are customizable through various properties.
For more information about the layout types, visit:
Note: Qt Quick Layouts was introduced in Qt 5.1 and requires Qt Quick. | https://doc.qt.io/archives/qt-5.10/qtquick-usecase-layouts.html | CC-MAIN-2019-18 | refinedweb | 521 | 53.1 |
This document discusses some of the similarities and differences between Java and C#, in order to get a grasp of what's involved when migrating to .NET. The key similarities between Java and C# include object orientation with single inheritance, automatic garbage collection, and structured exception handling using try...catch blocks, with all exceptions deriving from a common base class (System.Exception in C#).
Let's now look at the important differences that we'll cover in this document:
Source File Conventions
Top-level Declarations
Fully Qualified Names and Namespace Aliases
Pre-Processing Directives
Language Syntax
Converting and Casting
Value and Reference Types
Boxing and Unboxing
Operators
Flow Control
Class Fundamentals
Access Modifiers
The Main() Method
Passing by Reference
Using an Indeterminate Number of Parameters
Properties
Structs
Arrays in C#
Inheritance and Derived Classes
Method Overriding
Advanced C# Techniques
Garbage Collection
Summary
There are some differences in the file naming conventions and structure of source programs in the two languages that we need to be aware of.
The naming convention for files containing C# classes is a little different from Java. Firstly, in Java, all source files have a .java extension. Each source file contains one top-level public class declaration, and the class name must match the filename. In other words, a class called Customer declared with public scope must be defined in a source file with the name Customer.java.
C# source code on the other hand is denoted by the .cs extension. Unlike Java, source files can contain more than one top-level public class declaration, and the filename doesn't need to match any of the classes' names.
In both Java and C#, source code begins with a few top-level declarations in a certain sequence. There are only a few differences in the declarations made in Java and C# programs.
In Java, we can group classes together with the package keyword. A packaged class must use the package keyword in the first executable line of the source file. Any import statements required to access classes in other packages appear next, and then comes the class declaration, like so:
package <Package name>;

import <package hierarchy>.<class name>;

class Customer
{
    ...
}
C# uses the concept of namespaces to group logically related classes through the namespace keyword. These act similarly to Java packages, and a class with the same name may appear within two different namespaces. To access classes defined in a namespace external to the current one, we use the using keyword followed by the namespace name, as shown below:
using <namespace hierarchy>.<class name>;

namespace <namespace name>
{
    class Customer
    {
        ...
    }
}
Note that using statements may quite legally be placed inside a namespace declaration, in which case such imported namespaces form part of the containing namespace.
Java does not allow multiple packages in the same source file, while C# does allow multiple namespaces in a single .cs file:
namespace AcmeAccounting
{
    public class GetDetails
    {
        ...
    }
}

namespace AcmeFinance
{
    public class ShowDetails
    {
        ...
    }
}
Like Java, we can access classes in any .NET or user-defined namespace without a using reference for that namespace, by providing the fully qualified name for the class, such as System.Data.DataSet or AcmeAccounting.GetDetails in the above example.
Fully qualified names can get long and unwieldy, and in such cases, we can use the using keyword to specify a short name, or alias, to make our code more readable.
In the following code, an alias is created to refer to code written by a fictional company:
using DataTier = Acme.SQLCode.Client;
using System;

public class OutputSales
{
    public static void Main()
    {
        int sales = DataTier.GetSales("January");
        Console.WriteLine("January's Sales: {0}", sales);
    }
}
Note the syntax for WriteLine(), with {x} in the format string, where x denotes the position in the argument list of the value to insert at that position. Assuming the GetSales() method returned 500, the output of the application would be:
January's Sales: 500
Similar to C and C++, C# includes pre-processor directives that provide the ability to conditionally skip sections of source files, report error and warning conditions, and to delineate distinct regions of source code. The term "pre-processing directives" is used only for consistency with the C and C++ programming languages as C# does not include a separate pre-processing step. For a full list of C# Pre-processor directives, see C# Pre-processor directives.
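As a brief illustration (the symbol name DEBUG_BUILD and the class are hypothetical), conditional compilation with #define and #if might look like this:

```csharp
#define DEBUG_BUILD  // symbol definitions must precede any code in the file

using System;

public class PreprocessorDemo
{
    public static string BuildMode()
    {
        // The compiler keeps only one branch, depending on whether
        // DEBUG_BUILD was defined above (or via a compiler switch).
#if DEBUG_BUILD
        return "debug";
#else
        return "release";
#endif
    }

    public static void Main()
    {
        Console.WriteLine(BuildMode());  // prints "debug"
    }
}
```

Other directives, such as #region/#endregion and #warning/#error, delineate collapsible source regions and report conditions at compile time.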
In this section, we discuss the similarities and differences in language syntax between the two languages. Some of the major differences are:

- Constants are declared with the final keyword in Java, while C# uses the const and readonly keywords.
- C# provides struct value types, which have no direct equivalent in Java.
- Java's finalize() method is replaced in C# by destructors.
- C# adds delegates, which act as type-safe references to methods.

In Java, primitive data types are wrapped by classes: the Integer class wraps the int data type, and the Double class wraps the double data type.
On the other hand, all primitive data types in C# are objects in the System namespace. For each data type, a short name, or alias, is provided. For instance, int is the short name for System.Int32 and double is the short form of System.Double.
The list of C# data types and their aliases is given below. As you will notice, the first 8 of these correspond to the primitive types available in Java. Note however that Java's boolean is called bool in C#.
| Short Name | .NET Class | Range |
|------------|------------|-------|
| byte | System.Byte | 0 to 255 |
| sbyte | System.SByte | -128 to 127 |
| short | System.Int16 | -32,768 to 32,767 |
| ushort | System.UInt16 | 0 to 65,535 |
| int | System.Int32 | -2,147,483,648 to 2,147,483,647 |
| uint | System.UInt32 | 0 to 4,294,967,295 |
| long | System.Int64 | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 |
| ulong | System.UInt64 | 0 to 18,446,744,073,709,551,615 |
| float | System.Single | -3.402823e38 to 3.402823e38 |
| double | System.Double | -1.79769313486232e308 to 1.79769313486232e308 |
| char | System.Char | A single Unicode character (U+0000 to U+FFFF) |
| bool | System.Boolean | true or false |
| object | System.Object | Base type of all types |
| string | System.String | A sequence of Unicode characters |
| decimal | System.Decimal | Approximately ±1.0e-28 to ±7.9e28 |
Because C# represents all primitive data types as objects, it is possible to call an object method on a primitive data type. For example:
int i = 10;
object o = i;
Console.WriteLine(o.ToString());
This is achieved with the help of automatic boxing and unboxing. For more information, see Boxing and Unboxing.
Similar to C/C++, and not available in Java, enums or enumerations are used to group named constants. The example below defines a simple Color enumeration.
public enum Color {Green, Orange, Red, Blue}
Integral values can also be assigned to enums as shown in the following enum declaration:
public enum Color {Green=10, Orange=20, Red=30, Blue=40}
The program below calls the GetNames method of the Enum type to display the available constants for an enumeration. It then assigns a value to an enum and displays the value.
using System;

public class TypeTest
{
    public static void Main()
    {
        Console.WriteLine("Possible color choices: ");

        // Enum.GetNames returns a string array of named constants for the enum
        foreach (string s in Enum.GetNames(typeof(Color)))
        {
            Console.WriteLine(s);
        }

        Color FavoriteColor = Color.Blue;
        Console.WriteLine("Favorite Color is {0}", FavoriteColor);
        Console.WriteLine("Favorite Color value is {0}", (int)FavoriteColor);
    }
}
After running, the program above would display the following output:
Possible color choices:
Green
Orange
Red
Blue
Favorite Color is Blue
Favorite Color value is 40

Unlike Java, the C# == operator compares the values of two strings rather than their references, behaving as Java's equals() method would. We'll discuss value types and references later in this document.
Just like in Java, C# developers should not use the string type for concatenating strings, to avoid the overhead of creating a new string object every time strings are concatenated. Instead, developers can use the StringBuilder class in the System.Text namespace, which is functionally equivalent to the Java StringBuffer class.
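As a minimal sketch of the difference (the BuildCsv method is illustrative, not part of either library):

```csharp
using System;
using System.Text;

public class ConcatDemo
{
    public static string BuildCsv(string[] items)
    {
        // Appends into one mutable buffer instead of allocating a new
        // string object for every + concatenation inside the loop.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < items.Length; i++)
        {
            if (i > 0)
            {
                sb.Append(",");
            }
            sb.Append(items[i]);
        }
        return sb.ToString();
    }

    public static void Main()
    {
        Console.WriteLine(BuildCsv(new string[] { "a", "b", "c" }));  // a,b,c
    }
}
```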
C# provides the ability to avoid the usage of escape sequences like "\t" for tab or "\" for backslash characters within string constants. To do this, simply declare the verbatim string using the @ symbol to precede the assignment of the string value. The examples below show how to use escape characters and how to assign string literals:
//Using escaped characters
string path = "\\\\FileShare\\Directory\\file.txt";

//Using string literals
string escapedPath = @"\\FileShare\Directory\file.txt";
Both Java and C# follow similar rules for automatic conversions and casting of data types.
Like Java, C# supports both implicit and explicit type conversions. In the case of widening conversions, the conversions are implicit. For example, the following conversion from int to long is implicit, as in Java:
int intVariable = 5;
long l = intVariable;
Below is a list of implicit conversions between .NET data types:

| From | To |
|------|----|
| sbyte | short, int, long, float, double, or decimal |
| byte | short, ushort, int, uint, long, ulong, float, double, or decimal |
| short | int, long, float, double, or decimal |
| ushort | int, uint, long, ulong, float, double, or decimal |
| int | long, float, double, or decimal |
| uint | long, ulong, float, double, or decimal |
| long | float, double, or decimal |
| ulong | float, double, or decimal |
| char | ushort, int, uint, long, ulong, float, double, or decimal |
| float | double |
We cast expressions that we wish to explicitly convert using the same syntax as Java:
long longVariable = 5483;
int intVariable = (int)longVariable;
The explicit conversions between the .NET data types are listed below:

| From | To |
|------|----|
| sbyte | byte, ushort, uint, ulong, or char |
| byte | sbyte or char |
| short | sbyte, byte, ushort, uint, ulong, or char |
| ushort | sbyte, byte, short, or char |
| int | sbyte, byte, short, ushort, uint, ulong, or char |
| uint | sbyte, byte, short, ushort, int, or char |
| long | sbyte, byte, short, ushort, int, uint, ulong, or char |
| ulong | sbyte, byte, short, ushort, int, uint, long, or char |
| char | sbyte, byte, or short |
| float | sbyte, byte, short, ushort, int, uint, long, ulong, char, or decimal |
| double | sbyte, byte, short, ushort, int, uint, long, ulong, char, float, or decimal |
| decimal | sbyte, byte, short, ushort, int, uint, long, ulong, char, float, or double |
C# supports two kinds of variable types: value types, which directly contain their data, and reference types, which store a reference to an object's data in memory.
Let's explore this a little further. If we create two value type variables, i and j, like so:
int i = 10; int j = 20;
Figure 1: Memory locations for value types
then i and j are completely independent of each other; they are given separate memory locations:
If we change the value of one of these variables, the other will naturally not be affected. For instance, if we have an expression such as this:
int k = i;
then there is still no connection between the variables. That is, if we then change the value of i, k will remain at the value that i had at the time of the assignment.
Reference types however act differently. For instance, we could declare two variables like so:
myClass a = new myClass();
myClass b = a;
Now because classes are reference types in C#, a is known as a reference to myClass. The first of the above two lines creates an instance of myClass in memory, and sets a to reference it. Thus, when we set b to equal a, b contains a duplicate of the reference to the same object in memory. If we now change properties on b, properties on a would reflect these changes, because both point to the same object in memory, as shown in this figure:
Figure 2: Memory locations for reference types
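The behavior in the two figures can be demonstrated directly in code; MyClass here is a hypothetical class with a single public field:

```csharp
using System;

public class MyClass
{
    public int Value;
}

public class CopySemantics
{
    public static int CopyValue()
    {
        int i = 10;
        int k = i;   // value types: the value itself is copied
        i = 99;
        return k;    // still 10 -- changing i did not affect k
    }

    public static int CopyReference()
    {
        MyClass a = new MyClass();
        MyClass b = a;   // reference types: only the reference is copied
        a.Value = 42;
        return b.Value;  // 42 -- a and b point to the same object
    }

    public static void Main()
    {
        Console.WriteLine(CopyValue());      // prints 10
        Console.WriteLine(CopyReference());  // prints 42
    }
}
```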
The process of converting a value type to a reference type is called boxing. The inverse process, converting a reference type to a value type, is called unboxing. This is illustrated in the following code:
int valueVariable = 10;

// boxing
object obj = valueVariable;

// unboxing
int anotherValue = (int) obj;
Java requires us to perform such conversions manually. Primitive data types may be converted into objects of wrapper classes by constructing such objects (boxing). Similarly, the values of primitive data types may be extracted from the objects of wrapper classes by calling an appropriate method on such objects (unboxing). For more information on boxing, see Boxing Conversion, for more information on unboxing see Unboxing Conversion.
C# offers all applicable operators supported by Java, as listed in the following table. At the end of the table, you'll see some new operators available in C# but not Java:
| Category | Operators |
|----------|-----------|
| Unary | ++ -- + - ! ~ () |
| Multiplicative | * / % |
| Additive | + - |
| Shift | << >> |
| Relational | < > <= >= instanceof (is in C#) |
| Equality | == != |
| Logical AND | & |
| Logical XOR | ^ |
| Logical OR | \| |
| Conditional AND | && |
| Conditional OR | \|\| |
| Conditional | ? : |
| Assignment | = *= /= %= += -= <<= >>= &= ^= \|= |
| C# only | typeof sizeof checked unchecked |
The only Java operator not available in C# is the >>> shift operator. This operator is present in Java as a consequence of the lack of unsigned variables in that language, for cases when right-shifting is required to insert 0s in the most significant bits.
C# however does support unsigned variables, and thus C# only needs the standard >> operator. This operator produces different results depending on whether the operand is signed or unsigned. Right-shifting an unsigned number inserts 0 in the most significant bit, while right-shifting a signed number copies the previous most significant bit.
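A short sketch of this behavior: right-shifting a negative int copies the sign bit into the vacated positions, while right-shifting a uint with the same bit pattern inserts zeros:

```csharp
using System;

public class ShiftDemo
{
    public static void Main()
    {
        int signedValue = -8;
        uint unsignedValue = 0xFFFFFFF8;   // same 32-bit pattern as -8

        // Arithmetic shift: the sign bit (1) is copied into the top bits.
        Console.WriteLine(signedValue >> 1);    // prints -4

        // Logical shift: zeros are inserted into the top bits.
        Console.WriteLine(unsignedValue >> 1);  // prints 2147483644
    }
}
```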
Arithmetic operations will result in overflow if the result is too large for the number of bits allocated to the data type in use. Such overflow can be checked or ignored for a given integral arithmetic operation using the checked and unchecked keywords. If the expression is a constant expression using checked, an error would be generated at compile time.
Here's a simple example to illustrate these operators:
using System;

public class Class1
{
    public static void Main(string[] args)
    {
        short a = 10000, b = 10000;

        short d = unchecked((short)(10000 * 10000));
        Console.WriteLine("d = " + d);

        short c = (short)(a * b);
        Console.WriteLine("c = " + c);

        short e = checked((short)(a * b));
        Console.WriteLine("e = " + e);
    }
}
In this code, the unchecked operator circumvents the compile time error that would otherwise be caused by the following statement:
short d = unchecked((short)(10000*10000));
The next expression is unchecked by default, so the value will silently overflow:
short c = (short)(a*b);
We can force the expression to be checked for overflow at run time with the checked operator:
short e = checked((short)(a*b));
When run, assigning the first two values to d and c will silently overflow, producing -7936 in each case, but when attempting to compute the value for e using checked(), the program will throw a System.OverflowException.
Note: You can also control whether to check for arithmetic overflow in a block of code by using the command line compiler switch (/checked) or directly in Visual Studio on a per project basis.
The is operator determines whether the type of the object on the left-hand side matches the type specified on the right:
if (objReference is SomeClass) ...
In the following example, the CheckType() method prints a message describing the type of the argument passed to it:
using System;

public class ShowTypes
{
    public static void Main(string[] args)
    {
        CheckType(5);
        CheckType(10f);
        CheckType("Hello");
    }

    private static void CheckType(object obj)
    {
        if (obj is int)
        {
            Console.WriteLine("Integer parameter");
        }
        else if (obj is float)
        {
            Console.WriteLine("Float parameter");
        }
        else if (obj is string)
        {
            Console.WriteLine("String parameter");
        }
    }
}
Running this program produces the following output:
Integer parameter
Float parameter
String parameter
The sizeof operator returns the size in bytes of the specified value type, as illustrated by the following code:
using System;

public class Size
{
    public static void Main()
    {
        unsafe
        {
            Console.WriteLine("The size of short is {0}.", sizeof(short));
            Console.WriteLine("The size of int is {0}.", sizeof(int));
            Console.WriteLine("The size of double is {0}.", sizeof(double));
        }
    }
}
Note that the code containing the sizeof operator has been placed in an unsafe block. This is because the sizeof operator is considered an unsafe operation due to its accessing memory directly. For more information on unsafe code, see Safe And Unsafe Code.
The typeof operator returns the type of the class passed to it as a System.Type object. The GetType() method is related, and returns the run-time type of a class or an expression. typeof and GetType() can be used in conjunction with reflection to find information about an object dynamically, as in the following example:
using System;
using System.Reflection;

public class Customer
{
    string name;

    public string Name
    {
        set { name = value; }
        get { return name; }
    }
}

public class TypeTest
{
    public static void Main()
    {
        Type typeObj = typeof(Customer);
        Console.WriteLine("The Class name is {0}", typeObj.FullName);

        // Or use the GetType() method:
        // Customer obj = new Customer();
        // Type typeObj = obj.GetType();

        Console.WriteLine("\nThe Class Members\n=================\n");
        MemberInfo[] class_members = typeObj.GetMembers();
        foreach (MemberInfo members in class_members)
        {
            Console.WriteLine(members.ToString());
        }

        Console.WriteLine("\nThe Class Methods\n=================\n");
        MethodInfo[] class_methods = typeObj.GetMethods();
        foreach (MethodInfo methods in class_methods)
        {
            Console.WriteLine(methods.ToString());
        }
    }
}
If you run this program, it will produce output something like this:
The Class name is Customer

The Class Members
=================

Int32 GetHashCode()
Boolean Equals(System.Object)
System.String ToString()
Void set_Name(System.String)
System.String get_Name()
System.Type GetType()
Void .ctor()
System.String Name

The Class Methods
=================

Int32 GetHashCode()
Boolean Equals(System.Object)
System.String ToString()
Void set_Name(System.String)
System.String get_Name()
System.Type GetType()
This shows us the members that all classes inherit from System.Object, as well as revealing the way that C# represents get and set property accessors as get_xxx() and set_xxx() methods internally.
In the next example, we use GetType() to find the type of an expression at run time:
using System;

public class TypeTest
{
    public static void Main()
    {
        int radius = 8;

        Console.WriteLine("Calculated area is = {0}",
            radius * radius * System.Math.PI);
        Console.WriteLine("The result is of type {0}",
            (radius * radius * System.Math.PI).GetType());
    }
}
The output of this program tells us that the result is of type System.Double, which is chosen because System.Math.PI is of this type.
Calculated area is = 201.061929829747
The result is of type System.Double
Flow control statements are very similar in both languages, but there are some minor differences to cover in this section.
Branching statements change the flow of program execution at run time according to certain conditions.
The if...else statements are identical in both languages.
In both languages, the switch statement provides conditional multiple branching operations. There is a difference though in that Java allows you to "fall through" a case and execute the next case unless you use a break statement at the end of the case. C#, however, requires the use of either a break or a goto statement at the end of each case, and if neither is present, the compiler produces the following error:
Control cannot fall through from one case label to another.
Beware though, that where a case doesn't specify any code to execute when that case is matched, control will fall through to the subsequent case. When using goto in a switch statement, we can only jump to another case block in the same switch. If we want to jump to the default case, we would use goto default; otherwise we'd use goto case cond; where cond is the matching condition of the case we wish to jump to. Another difference from Java's switch is that in Java, we can only switch on integer types, while C# lets us switch on a string variable.
For example, the following would be valid in C#, but not in Java:
switch (args[0])
{
    case "copy":
        ...
        break;
    case "move":
        ...
        goto case "delete";
    case "del":
    case "remove":
    case "delete":
        ...
        break;
    default:
        ...
        break;
}
In Java, goto is a reserved keyword that is not implemented. However, we can use labeled statements with break or continue to achieve a similar purpose to goto.
C# does allow the goto statement to jump to a labeled statement. Note though that in order to jump to a particular label, the goto statement must be within the scope of the label. In other words, goto may not be used to jump into a statement block (although it can jump out of one), to jump out of a class, or to exit the finally block in try...catch statements. Be aware though that goto is discouraged in most cases, as it contravenes good object-oriented programming practice.
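As an illustration (the class, label, and variable names here are my own, not from the article), goto with a label gives a clean exit from nested loops, one of the few uses that is generally accepted:

```csharp
using System;

public class GotoLabelExample
{
    public static void Main()
    {
        // A small search over a 2D grid; goto jumps to a label
        // in the same scope to exit both loops at once.
        int[,] grid = { { 1, 2 }, { 3, 4 } };
        int target = 3;

        for (int i = 0; i < 2; i++)
        {
            for (int j = 0; j < 2; j++)
            {
                if (grid[i, j] == target)
                {
                    goto Found;   // jump out of both loops
                }
            }
        }
        Console.WriteLine("Not found");
        return;

    Found:
        Console.WriteLine("Found the target value");
    }
}
```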
Looping statements repeat a specified block of code until a given condition is met.
The syntax and operation of for loops is the same in both languages:
for (initialization; condition; expression) statement;
C# introduces a new loop type called the foreach loop (similar to Visual Basic's For Each). The foreach loop allows iterating through each item in a container class (for example, arrays) that supports the IEnumerable interface. The following code illustrates the use of the foreach statement to output the contents of an array:
public static void Main()
{
    int[] arr1 = new int[] {1, 2, 3, 4, 5, 6};
    foreach (int i in arr1)
    {
        Console.WriteLine("Value is {0}", i);
    }
}
We look at arrays in C# in more detail in the Arrays in C# section.
The syntax and operation of while and do...while statements are the same in both languages:
while (condition)
{
    // statements
}

As usual, don't forget the trailing ; in do...while loops:

do
{
    // statements
}
while (condition);
Modifiers are pretty much the same as those in Java, with several small differences that we will cover here. Each member of a class or the class itself may be declared with an access modifier to define the scope of permitted access. Classes that are not declared inside other classes can only specify the public or internal modifiers, while nested classes, like other class members, can specify any of the following five:
public
protected
private
internal
protected internal
A public modifier makes the member available anywhere both inside and outside the class. A protected modifier indicates that access is limited to within the containing class or classes derived from it. A private modifier means that access is only possible from within the containing type.
An internal item may only be accessed within the current assembly. An assembly in the .NET world equates roughly to Java's JAR file; it represents the building blocks from which other programs can be constructed.
A protected internal item is visible only to the current assembly or types derived from the containing class.
In C#, the default access modifier is private, while Java's default is package scope.
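To make the scopes concrete, here is a minimal sketch (the class and field names are illustrative, not from the original text) annotating each of the five access levels:

```csharp
public class AccessExample
{
    public int PublicField;               // visible everywhere
    protected int ProtectedField;         // this class and derived classes
    private int PrivateField;             // this class only
    internal int InternalField;           // anywhere in the same assembly
    protected internal int ProtIntField;  // same assembly OR derived classes
}
```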
A class with the sealed modifier on its class declaration can be thought of as directly opposite to an abstract class – it cannot be inherited. We might mark a class as sealed to prevent other classes overriding its functionality. Naturally, a sealed class cannot be abstract. Also note that structs are implicitly sealed; therefore, they cannot be inherited. The sealed modifier is equivalent to marking a class with the final keyword in Java.
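A short sketch of the sealed modifier (class names are illustrative):

```csharp
public sealed class FinalClass
{
    public int Value;
}

// The following declaration would not compile, because a sealed
// class cannot be used as a base class:
// public class Derived : FinalClass { }
```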
To define a constant in C#, we use the const or readonly modifier in place of Java's final keyword. The distinguishing factor between the two modifiers in C# is that const items are dealt with at compile time, while readonly fields are set up at runtime. This can allow us to modify the expression used to determine the value of a readonly field at runtime.
This means that assignment to readonly fields may occur in the class constructor as well as in the declaration. For example, the following class declares a readonly variable called IntegerVariable that is initialized in the class constructor:
using System;

public class ReadOnlyClass
{
    private readonly int IntegerConstant;

    public ReadOnlyClass()
    {
        IntegerConstant = 5;
    }

    // We get a compile time error if we try to set the value of the
    // readonly class variable outside of the constructor:
    public int IntMember
    {
        // set { IntegerConstant = value; }   // not allowed
        get { return IntegerConstant; }
    }

    public static void Main(string[] args)
    {
        ReadOnlyClass obj = new ReadOnlyClass();
        // We cannot perform this operation on a readonly field:
        // obj.IntMember = 100;
        Console.WriteLine("Value of IntegerConstant field is {0}", obj.IntMember);
    }
}
Note that if a readonly modifier is applied to a static field, it should be initialized in the static constructor of the class.
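For example, a hypothetical class might compute a static readonly field at run time in its static constructor:

```csharp
using System;

public class Config
{
    // A static readonly field may be assigned in a static constructor,
    // allowing its value to be computed at run time.
    public static readonly DateTime StartTime;

    static Config()
    {
        StartTime = DateTime.Now;
    }
}
```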
Every C# application must contain one, and only one, Main() method specifying where program execution is to begin. Note that in C#, we capitalize Main() while Java uses lowercase main().
Main() can only return int or void, and has an optional string array argument to represent command line parameters:
static int Main (string[] args)
{
    ...
    return 0;
}
The string array parameter that contains any command-line arguments passed in works just as in Java. Thus, args[0] specifies the first command-line parameter, args[1] denotes the second parameter, and so on. Unlike C++, the args array does not contain the name of the EXE file.
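As a minimal sketch (the class name is mine), a program that echoes its arguments back shows the indexing:

```csharp
using System;

public class EchoArgs
{
    // Prints each command-line argument on its own line.
    // Note that args[0] is the first argument, not the program name.
    public static int Main(string[] args)
    {
        for (int i = 0; i < args.Length; i++)
        {
            Console.WriteLine("args[{0}] = {1}", i, args[i]);
        }
        return 0;
    }
}
```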
When you pass parameters to a method, they may be passed by value or by reference. Value parameters simply take the value of any variable for use in the method, and hence the variable value in the calling code is not affected by actions performed on the parameters within a method.
Reference parameters on the other hand point to a variable declared in the calling code, and thus methods will modify the contents of that variable when passed by reference.
In both Java and C#, method parameters that refer to an object are always passed by reference, while primitive data type parameters are passed by value.
In C#, all parameters are passed by value by default. To pass by reference, we need to specify one of the keywords ref or out. The difference between these two keywords is in the parameter initialization. A ref parameter must be initialized before use, while an out parameter does not have to be explicitly initialized before being passed and any previous value is ignored.
Be aware that when reference types are used as parameters for a method, the reference is itself passed by value. However, the reference still points to the same object in memory, and so changes made to the object's properties will persist after the method exits. But, as the reference is itself passed by value, should it be changed inside the method to point to a different object, or even a new one, the reference would be restored to point to the original object once the method completes.
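This sketch (the Box class is illustrative, not from the article) demonstrates both effects:

```csharp
using System;

public class Box
{
    public int Value;
}

public class ReferenceDemo
{
    static void Mutate(Box b)
    {
        b.Value = 42;      // change persists: same object in memory
        b = new Box();     // reassignment does NOT persist: the
        b.Value = 99;      // reference itself was passed by value
    }

    public static void Main()
    {
        Box box = new Box();
        box.Value = 1;
        Mutate(box);
        Console.WriteLine(box.Value);   // prints 42, not 99
    }
}
```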
We specify this keyword on a parameter when we want the called method to permanently change the value of variables used as parameters. What happens is that rather than passing the value of a variable used in the call, a reference to the variable itself is passed. The method then works on the reference, so that changes to the parameter during the method's execution are persisted to the original variable used as a parameter to the method.
The following code illustrates this in the Add() method, where the second int parameter is passed by reference with the ref keyword:
using System;

public class RefClass
{
    public static void Main(string[] args)
    {
        int total = 20;
        Console.WriteLine("Original value of 'total': {0}", total);

        // Call the Add method
        Add(10, ref total);
        Console.WriteLine("Value after Add() call: {0}", total);
    }

    public static void Add(int i, ref int result)
    {
        result += i;
    }
}
The output of this simple example demonstrates that changes made to the result parameter are reflected in the variable, total, used in the Add() call:
Original value of 'total': 20
Value after Add() call: 30
This is because the result parameter references the actual memory location occupied by the total variable in the calling code. Be aware that a property of a class is not a variable, and cannot be used directly as a ref parameter.
Note that the ref keyword must precede the parameter when the method is called, as well as in the method declaration.
The out keyword has a very similar effect to the ref keyword, and modifications made to a parameter declared using out will be visible outside the method. The two differences from ref are that any initial value of an out parameter is ignored within the method, and secondly that an out parameter must be assigned to during the method:
using System;

public class OutClass
{
    public static void Main(string[] args)
    {
        int total = 20;
        Console.WriteLine("Original value of 'total': {0}", total);
        Add(33, 77, out total);
        Console.WriteLine("Value after Add() call: {0}", total);
    }

    public static void Add(int i, int j, out int result)
    {
        // The following line would cause a compile error
        // Console.WriteLine("Initial value inside method: {0}", result);
        result = i + j;
    }
}
In this case, the third parameter to the Add() method is declared with the out keyword, and calls to the method also require the out keyword for that parameter. The output will be:
Original value of 'total': 20
Value after Add() call: 110
So, to sum up, use the ref keyword when you want a method to modify an existing variable, and use the out keyword to return a value produced inside the method. It is generally used in conjunction with the method's return value when the method produces more than one result value for the calling code.
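A common pattern is to return a success flag while delivering the computed results through out parameters. This sketch (all names are mine) returns both a quotient and a remainder:

```csharp
using System;

public class DivideExample
{
    // Returns true on success; the quotient and remainder come back
    // through out parameters, giving the caller three results at once.
    static bool TryDivide(int a, int b, out int quotient, out int remainder)
    {
        if (b == 0)
        {
            quotient = 0;
            remainder = 0;
            return false;
        }
        quotient = a / b;
        remainder = a % b;
        return true;
    }

    public static void Main()
    {
        int q, r;
        if (TryDivide(17, 5, out q, out r))
        {
            Console.WriteLine("17 / 5 = {0} remainder {1}", q, r);  // 3 remainder 2
        }
    }
}
```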
C# allows us to send a variable number of parameters to a method by specifying the params keyword when the method is declared. The argument list can contain regular parameters also, but note that the parameter declared with the params keyword must come last. It takes the form of a variable length array, and there can be only one params parameter per method.
When the compiler tries to resolve a method call, it looks for a method whose argument list matches the method called. If no method overload that matches the argument list can be found, but there is a matching version with a params parameter of the appropriate type, then that method will be called, placing the extra arguments in an array.
The following example demonstrates this idea:
using System;

public class ParamsClass
{
    public static void Main(string[] args)
    {
        Average("List One", 5, 10, 15);
        Average("List Two", 5, 10, 15, 20, 25, 30);
    }

    public static void Average(string title, params int[] values)
    {
        int Sum = 0;
        Console.Write("Average of {0}: ", title);
        for (int i = 0; i < values.Length; i++)
        {
            Sum += values[i];
            Console.Write(values[i] + ", ");
        }
        Console.WriteLine(": {0}", (float)Sum / values.Length);
    }
}
In the above example, the method Average is declared with a params parameter of type integer array, letting us call it with any number of arguments. The output is shown here:
Average of List One: 5, 10, 15, : 10
Average of List Two: 5, 10, 15, 20, 25, 30, : 17.5
Note that we can specify a params parameter of type Object if we wish to allow indeterminate parameters of different types.
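For example, a hypothetical method taking a params object[] parameter can accept arguments of mixed types:

```csharp
using System;

public class ObjectParams
{
    // A params array of type object accepts any number of arguments
    // of any type.
    static void Describe(params object[] items)
    {
        foreach (object item in items)
        {
            Console.WriteLine("{0} ({1})", item, item.GetType().Name);
        }
    }

    public static void Main()
    {
        Describe(42, "hello", 3.14, true);
    }
}
```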
In C#, a property is a named member of a class, struct, or interface offering a neat way to access private fields through what are called the get and set accessor methods.
The following code snippet declares a property called Species for the class Animal, which abstracts access to the private variable called name:
public class Animal
{
    private string name;

    public string Species
    {
        get { return name; }
        set { name = value; }
    }
}
Often, the property will have the same name as the internal member that it accesses, but with a capital initial letter (such as Name in the above case) or the internal member will have an _ prefix. Also, note the implicit parameter called value used in the set accessor – this has the type of the underlying member variable.
Accessors are in fact represented internally as get_X() and set_X() methods in order to maintain compatibility with the .NET languages which do not support accessors (as shown in the typeof and GetType() section earlier in this document). Once a property is defined, it's then very easy to get or set its value:
Animal animal = new Animal();

// Set the property
animal.Species = "Lion";

// Get the property value
string str = animal.Species;
If a property only has a get accessor, it is a read-only property. If it only has a set accessor, it is a write-only property. If it has both, it is a read-write property.
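A brief sketch (class and property names are illustrative) showing a write-only and a read-only property side by side:

```csharp
using System;

public class Thermometer
{
    private double celsius;

    // Write-only property: set accessor only
    public double Celsius
    {
        set { celsius = value; }
    }

    // Read-only property: get accessor only
    public double Fahrenheit
    {
        get { return celsius * 9 / 5 + 32; }
    }

    public static void Main()
    {
        Thermometer t = new Thermometer();
        t.Celsius = 100;
        Console.WriteLine(t.Fahrenheit);   // prints 212
    }
}
```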
C# supports the struct keyword, another item that originates in C but is not available in Java. You can think of a struct as a lightweight class. It can contain constructors, constants, fields, methods, properties, indexers, operators, and nested types in much the same way as a class. structs differ from classes in that they cannot be abstract and do not support implementation inheritance. The important difference with a class is that structs are value types, while classes are reference types. There are some differences in the way constructors work for structs. In particular, the compiler always supplies a default no-parameter constructor, which you are not permitted to replace.
In the following example, we initialize a struct with the new keyword and also by initializing the members of an instance:
using System;

public struct CustomerStruct
{
    public int ID;
    public string name;

    public CustomerStruct(int customerID, string customerName)
    {
        ID = customerID;
        name = customerName;
    }
}

class TestClass
{
    public static void Main(string[] args)
    {
        // Declare a CustomerStruct using the default constructor
        CustomerStruct customer = new CustomerStruct();
        Console.WriteLine("Struct values before initialization");
        Console.WriteLine("ID = {0}, Name = {1}", customer.ID, customer.name);

        customer.ID = 100;
        customer.name = "Robert";
        Console.WriteLine("Struct values after initialization");
        Console.WriteLine("ID = {0}, Name = {1}", customer.ID, customer.name);
    }
}
When we compile and run the above code, its output shows that struct variables are initialized by default. The int variable is initialized to 0, and the string variable to an empty string:
struct values before initialization
ID = 0, Name =
struct values after initialization
ID = 100, Name = Robert
Note that when we declare customer using the alternate notation, CustomerStruct customer, its member variables would not be initialized, and trying to use them before setting them to values would generate a compile time error.
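Because structs are value types and classes are reference types, assignment behaves differently for each. This sketch (the type names are mine) makes the contrast visible:

```csharp
using System;

public struct PointStruct { public int X; }
public class PointClass { public int X; }

public class CopySemantics
{
    public static void Main()
    {
        // Struct assignment copies the whole value...
        PointStruct s1 = new PointStruct();
        PointStruct s2 = s1;
        s2.X = 10;
        Console.WriteLine(s1.X);   // prints 0: s1 is unaffected

        // ...while class assignment copies only the reference.
        PointClass c1 = new PointClass();
        PointClass c2 = c1;
        c2.X = 10;
        Console.WriteLine(c1.X);   // prints 10: same object
    }
}
```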
There are some differences between C# and Java arrays that I will cover in this section.
A one-dimensional array stores a fixed number of items in a linear fashion, requiring just a single index value to identify any one item.
In C#, the square brackets in the array declaration must follow the data type, and may not appear after the variable name as permitted in Java. Thus, an array of type integers is declared using following syntax:
int[] MyArray;
and the following declaration is invalid in C#:
int MyArray[];
Once we have declared an array, we use the new keyword to set its size, just as in Java:
int[] MyArray;         // declares the array reference
MyArray = new int[5];  // creates a 5 element integer array
We then access elements in a one-dimensional array using identical syntax to Java, noting that C# array indices are also zero-based:
MyArray [4] // accesses the last element in the array
Array elements may be initialized at creation using the same syntax as in Java:
MyArray = new int[5] {1, 2, 3, 4, 5};
Unlike Java, the number of initializers must match the array size exactly.
We can use this feature to declare and initialize a C# array in a single line:
int[] TaxRates = {0, 20, 23, 40, 50};
This syntax creates an array of size equal to the number of initializers.
Note, however, that the foreach iteration variable is read-only and holds element values rather than indices, so we cannot use a foreach loop to assign to array elements. To set each element of an array to a particular value, we use a for loop. The loop below sets each element of an array to zero:

int[] MyLittleArray = new int[5];
for (int i = 0; i < MyLittleArray.Length; i++)
{
    MyLittleArray[i] = 0;
}
Both C# and Java support creating jagged, or non-rectangular, arrays, where each row contains a different number of columns. For instance, the following jagged array has four entries in the first row, and three in the second:
int[][] JaggedArray = new int[2][];
JaggedArray[0] = new int[4];
JaggedArray[1] = new int[3];
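Each row of a jagged array is an ordinary one-dimensional array, accessed with double bracket pairs. A short sketch (names are illustrative):

```csharp
using System;

public class JaggedDemo
{
    public static void Main()
    {
        // Each row is a separate array and may have its own length.
        int[][] jagged = new int[2][];
        jagged[0] = new int[] { 1, 2, 3, 4 };
        jagged[1] = new int[] { 5, 6, 7 };

        for (int row = 0; row < jagged.Length; row++)
        {
            for (int col = 0; col < jagged[row].Length; col++)
            {
                Console.Write("{0} ", jagged[row][col]);
            }
            Console.WriteLine();
        }
    }
}
```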
C# allows us to create regular multi-dimensional arrays that can be thought of as a matrix of values of the same type, in addition to the jagged arrays described above. While both Java and C# support jagged arrays, only C# supports true rectangular multi-dimensional arrays.
We declare a multi-dimensional rectangular array using following syntax:
int[,] My2DIntArray;
float[,,,] My4DFloatArray;
where My2DIntArray is the name by which every element can be accessed.
Note that the line int[][] My2DIntArray; has a different meaning in C#: it declares a jagged array, as described above.
Once declared, we allocate memory to the array like so:
int[,] My2DIntArray;          // declares array reference
My2DIntArray = new int[5,4];  // allocates space for 5x4 integers
Elements of the array are then accessed using the following syntax:
My2DIntArray [4,3] = 906;
As arrays are zero-based, this sets the element in the fourth column of the fifth row – the bottom right hand corner – to 906.
Multi-dimensional arrays may be created, set up, and initialized in a single line by any of the following methods:
int[,] intArray = { {1,2,3}, {4,5,6} };
int[,] intArray = new int [2,3] { {1,2,3}, {4,5,6} };
int[,] intArray = new int [,] { {1,2,3}, {4,5,6} };
All the elements of an array may be initialized using a nested loop as shown here:
int[,] intArray = new int[5,4];
for (int i = 0; i < 5; i++)
{
    for (int j = 0; j < 4; j++)
    {
        intArray[i,j] = j;
    }
}
In .NET, arrays are implemented as instances of the System.Array class. This class provides several useful methods, such as Sort() and Reverse().
The following program demonstrates how easy these methods are to work with. First, we reverse the elements of an array using the Reverse() method of the Array class, and then we sort them with the Sort() method:
using System;

public class ArrayMethods
{
    public static void Main()
    {
        // Create string array of size 5
        string[] EmployeeNames = new string[5];
        Console.WriteLine("Enter five employee names:");

        // Read 5 employee names from user
        for (int i = 0; i < EmployeeNames.Length; i++)
        {
            EmployeeNames[i] = Console.ReadLine();
        }

        // Print the array in original order
        Console.WriteLine("\n** Original Array **");
        foreach (string EmployeeName in EmployeeNames)
        {
            Console.Write("{0} ", EmployeeName);
        }

        // Print the array in reverse order
        Console.WriteLine("\n\n** Values in Reverse Order **");
        System.Array.Reverse(EmployeeNames);
        foreach (string EmployeeName in EmployeeNames)
        {
            Console.Write("{0} ", EmployeeName);
        }

        // Print the array in sorted order
        Console.WriteLine("\n\n** Values in Sorted Order **");
        System.Array.Sort(EmployeeNames);
        foreach (string EmployeeName in EmployeeNames)
        {
            Console.Write("{0} ", EmployeeName);
        }
    }
}
Here's some typical output for this program:
Enter five employee names:
Luca
Angie
Brian
Kent
Beatriz

** Original Array **
Luca Angie Brian Kent Beatriz

** Values in Reverse Order **
Beatriz Kent Brian Angie Luca

** Values in Sorted Order **
Angie Beatriz Brian Kent Luca
We can extend the functionality of an existing class by creating a new class that derives from the existing class. The derived class inherits the properties of the base class, and we can add or override methods and properties as required.
In C#, both inheritance and interface implementation are defined by the : operator, equivalent to extends and implements in Java. Note that the base class should always be leftmost in the class declaration.
Like Java, C# does not support multiple inheritance, meaning that classes can't inherit from more than one class. We can however use interfaces for that purpose in the same way as in Java, as we'll see in the next section.
The following code defines a class called Point with two private member variables, x and y, representing the position of the point. These variables are accessed through properties called X and Y respectively:
public class Point
{
    private int x, y;

    public Point()
    {
        x = 0;
        y = 0;
    }

    public int X
    {
        get { return x; }
        set { x = value; }
    }

    public int Y
    {
        get { return y; }
        set { y = value; }
    }
}
We would derive a new class, called ColorPoint say, from the Point class like so:
public class ColorPoint : Point
ColorPoint then inherits all the fields and methods of the base class, to which we can add new ones to provide extra features in the derived class according to our needs. In this case, we add a private member and accessors to add color to the point:
using System.Drawing;

public class ColorPoint : Point
{
    private Color screenColor;

    public ColorPoint()
    {
        screenColor = Color.Red;
    }

    public Color ScreenColor
    {
        get { return screenColor; }
        set { screenColor = value; }
    }
}
Note that the constructor of the derived class implicitly calls the constructor for the base class (or the superclass in Java terminology). In inheritance, all base class constructors are called before the derived class's constructors in the order that the classes appear in the class hierarchy.
As in Java, we can't use a reference to a base class to access the members and methods of a derived class even if the base class reference may contain a valid reference to an object of the derived type.
We can reference a derived class with a reference to the derived type implicitly:
ColorPoint clrpt = new ColorPoint();
Point pt = clrpt;
In this code, the base class reference, pt, contains a copy of the clrpt reference.
We can access base class members in a subclass, even when those base members are hidden or overridden in the derived class, using the base keyword. For instance, we could create a derived class which contains a method with the same signature as in the base class. If we prefaced that method with the new keyword, we indicate that this is an all-new method belonging to the derived class. We could still provide a method for accessing the original method in the base class with the base keyword.
For instance, say our base Point class had a method called invert(), which swaps the x and y coordinates over. We could provide a substitute for this method in our derived ColorPoint class with code like this:
public new void invert()
{
    int holding = X;
    X = Y;
    Y = holding;
    screenColor = Color.Gray;
}
As you can see, this method swaps x and y, and then sets the point's color to gray. We could provide access to the base implementation for this method by creating another method in ColorPoint such as this one:
public void baseInvert()
{
    base.invert();
}
We would then invoke the base method on a ColorPoint object by calling the baseInvert() method.
ColorPoint clrpt = new ColorPoint();
clrpt.baseInvert();
Remember that we would get the same effect if we assigned an instance of ColorPoint to a base class reference, and then accessed its methods:
Point pt = clrpt;
pt.invert();
Base class objects are always constructed before any deriving class. Thus the constructor for the base class is executed before the constructor of the derived class. If the base class has more than one constructor, the derived class can decide the constructor to be called. For example, we could modify our Point class to add a second constructor:
public class Point
{
    private int x, y;

    public Point()
    {
        x = 0;
        y = 0;
    }

    public Point(int x, int y)
    {
        this.x = x;
        this.y = y;
    }
}
We could then change the ColorPoint class to use a particular one of the available constructors using the base keyword:
public class ColorPoint : Point
{
    private Color color;

    public ColorPoint(int x, int y) : base (x, y)
    {
        color = Color.Red;
    }
}
In Java, this functionality is implemented using the super keyword.
A derived class may override the method of a base class by providing a new implementation for the declared method. An important distinction between Java and C# is that by default Java methods are marked as virtual, while in C# methods must be explicitly marked as virtual using the virtual modifier. Property accessors as well as methods can be overridden in much the same way.
A method that is to be overridden in a derived class is declared with thevirtual modifier. In a derived class, the overridden method is declared using the override modifier.
The override modifier denotes a method or a property of a derived class that replaces one with the same name and signature in the base class. The base method, which is to be overridden, must be declared as virtual, abstract, or override: it is not possible to override a non-virtual or static method in this way. Both the overridden and the overriding method or property must have the same access level modifiers.
The following example shows a virtual method called StepUp that is overridden in a derived class with the override modifier:
using System;

public class CountClass
{
    public int count;

    // Constructor
    public CountClass(int startValue)
    {
        count = startValue;
    }

    public virtual int StepUp()
    {
        return ++count;
    }
}

class Count100Class : CountClass
{
    // Constructor
    public Count100Class(int x) : base(x)
    {
    }

    public override int StepUp()
    {
        return ((base.count) + 100);
    }

    public static void Main()
    {
        CountClass counter = new CountClass(10);
        CountClass bigCounter = new Count100Class(10);
        Console.WriteLine("Value of count in base class = {0}", counter.StepUp());
        Console.WriteLine("Value of count in derived class = {0}", bigCounter.StepUp());
    }
}
When we run this code, we see that the derived class's constructor reuses the constructor body given in the base class, letting us initialize the count member without duplicating that code, and that the call through the base class reference invokes the derived class's override. Here's the output we get:
Value of count in base class = 11
Value of count in derived class = 110
An abstract class declares one or more methods or properties as abstract. Such methods do not have an implementation provided in the class that declares them, although an abstract class can also contain non-abstract methods, that is, methods for which an implementation has been provided. An abstract class cannot be instantiated directly, but only as a derived class. Such derived classes must provide implementations for all abstract methods and properties, using the override keyword, unless the derived member is itself declared abstract.
The following example declares an abstract Employee class. We also create a derived class called Manager that provides an implementation of the abstract show() method defined in the Employee class:
using System;

public abstract class Employee
{
    // abstract show method
    public abstract void show();
}

// Manager class extends Employee
public class Manager : Employee
{
    string name;

    public Manager(string name)
    {
        this.name = name;
    }

    // override the show method
    public override void show()
    {
        Console.WriteLine("Name : " + name);
    }
}

public class CreateManager
{
    public static void Main(string[] args)
    {
        // Create instance of Manager and assign it to an Employee reference
        Employee temp = new Manager("John Chapman");

        // Call show method. This will call the show method of the Manager class
        temp.show();
    }
}
This code invokes the implementation of show() provided by the Manager class, and prints the employee name on screen.
An interface is a sort of "skeleton class", containing method signatures but no method implementations. In this way, interfaces are like abstract classes that contain only abstract methods. C# interfaces are very similar to Java interfaces, and work in very much the same way.
All the members of an interface are public by definition, and an interface cannot contain constants, fields (private data members), constructors, destructors, or any type of static member. The compiler will generate an error if any modifier is specified for the members of an interface.
We can derive classes from an interface in order to implement that interface. Such derived classes must provide implementations for all the interface's methods unless the derived class is declared abstract.
An interface is declared in much the same way as in Java. In an interface definition, a property indicates only its type, and whether it is read-only, write-only, or read/write, through its get and set keywords alone. The interface below declares one read-only property:
public interface IMethodInterface
{
    // method signatures
    void MethodA();
    int MethodB(float parameter1, bool parameter2);

    // properties
    int ReadOnlyProperty { get; }
}
A class can inherit from this interface, using a colon in place of Java's implements keyword. The implementing class must provide definitions for all methods, and any required property accessors:
public class InterfaceImplementation : IMethodInterface
{
    // fields
    private int count = 0;
    private int ID;

    // implement methods defined in interface
    public void MethodA()
    {
        ...
    }

    public int MethodB(float parameter1, bool parameter2)
    {
        ...
        return integerVariable;
    }

    public int ReadOnlyProperty
    {
        get { return count; }
    }

    // add extra methods if required
}
A class may implement multiple interfaces using the following syntax:
public class MyClass : interfacename1, interfacename2, interfacename3

If two of the implemented interfaces declare a member with the same signature, the class can implement each one explicitly by qualifying the member name with the interface name, as in IMethodInterface.MethodA.
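When two implemented interfaces declare members with the same signature, explicit implementation resolves the clash. This sketch (the interface and class names are my own) shows each implementation being reached through its interface type:

```csharp
using System;

// Two interfaces that declare a method with the same signature.
public interface IFileReader   { void Open(); }
public interface ISocketReader { void Open(); }

public class Reader : IFileReader, ISocketReader
{
    // Explicit implementations: each is reachable only through
    // a reference of the corresponding interface type.
    void IFileReader.Open()   { Console.WriteLine("Opening file"); }
    void ISocketReader.Open() { Console.WriteLine("Opening socket"); }
}

public class ExplicitDemo
{
    public static void Main()
    {
        Reader r = new Reader();
        ((IFileReader)r).Open();    // prints "Opening file"
        ((ISocketReader)r).Open();  // prints "Opening socket"
    }
}
```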
Like C++, C# allows us to overload operators for use on our own classes. This makes it possible for a user-defined data type to look as natural and be as logical to use as a fundamental data type. For example, we might create a new data type called Complex to represent a complex number, and provide methods that perform mathematical operations on such numbers using the standard arithmetic operators, such as using the + operator to add two complex numbers.
To overload an operator, we write a function that has the name operator followed by the symbol for the operator to be overloaded. For instance, this is how we would overload + operator:
public static complex operator+(complex lhs, complex rhs)
All operator overloads are static methods of the class. Also be aware that if you overload the equality (==) operator, you must overload the inequality operator (!=) as well.
The full list of operators that can be overloaded is:
Unary operators: +, -, !, ~, ++, --, true, false

Binary operators: +, -, *, /, %, &, |, ^, <<, >>, ==, !=, >, <, >=, <=
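As noted above, == and != must be overloaded as a pair. A minimal sketch (the Money class is illustrative; overriding Equals and GetHashCode alongside the operators is conventional and avoids compiler warnings):

```csharp
using System;

public class Money
{
    public int Cents;

    public Money(int cents) { Cents = cents; }

    // Overloading == requires overloading != as well.
    public static bool operator ==(Money lhs, Money rhs)
    {
        return lhs.Cents == rhs.Cents;
    }

    public static bool operator !=(Money lhs, Money rhs)
    {
        return lhs.Cents != rhs.Cents;
    }

    public override bool Equals(object obj)
    {
        return obj is Money && ((Money)obj).Cents == Cents;
    }

    public override int GetHashCode() { return Cents; }

    public static void Main()
    {
        Console.WriteLine(new Money(100) == new Money(100));  // True
        Console.WriteLine(new Money(100) != new Money(50));   // True
    }
}
```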
The next example creates a Complex class that overloads the + and - operators:
using System;

public class complex
{
    private float real;
    private float img;

    public complex(float p, float q)
    {
        real = p;
        img = q;
    }

    public complex()
    {
        real = 0;
        img = 0;
    }

    public void Print()
    {
        Console.WriteLine("{0} + {1}i", real, img);
    }

    // Overloading '+' operator
    public static complex operator+(complex lhs, complex rhs)
    {
        complex sum = new complex();
        sum.real = lhs.real + rhs.real;
        sum.img = lhs.img + rhs.img;
        return (sum);
    }

    // Overloading '-' operator
    public static complex operator-(complex lhs, complex rhs)
    {
        complex result = new complex();
        result.real = lhs.real - rhs.real;
        result.img = lhs.img - rhs.img;
        return (result);
    }
}
This class allows us to create and manipulate two complex numbers with code such as this:
using System; public class ComplexClass { public static void Main(string[] args) { // Set up complex numbers complex A = new complex(10.5f,12.5f); complex B = new complex(8.0f,4.5f); complex C; // Print object A and B Console.Write("Complex Number A: "); A.Print(); Console.Write("Complex Number B: "); B.Print(); // Add A and B, print result C = A + B; Console.Write("\nA + B = "); C.Print(); // Subtract A and B, print result C = A - B; Console.Write("A - B = "); C.Print(); } }
As the program demonstrates, we can now use the plus and minus operators on objects belonging to our complex class quite intuitively. Here is the output we would get:
Complex Number A: 10.5 + 12.5i Complex Number B: 8 + 4.5i A + B = 18.5 + 17i A - B = 2.5 + 8i
Java does not support operator overloading, although internally it overloads the + operator for string concatenation.
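Since Java lacks operator overloading, the closest Java equivalent of the complex class above uses ordinary named methods. The sketch below is illustrative (the class and method names are my own, not from the original article):

```java
// Java sketch of the C# complex class: named methods replace operator+ and operator-.
public class Complex {
    private final float real;
    private final float img;

    public Complex(float real, float img) {
        this.real = real;
        this.img = img;
    }

    // Equivalent of the overloaded '+' operator in the C# version.
    public Complex plus(Complex rhs) {
        return new Complex(this.real + rhs.real, this.img + rhs.img);
    }

    // Equivalent of the overloaded '-' operator.
    public Complex minus(Complex rhs) {
        return new Complex(this.real - rhs.real, this.img - rhs.img);
    }

    public float getReal() { return real; }
    public float getImg()  { return img; }

    @Override
    public String toString() { return real + " + " + img + "i"; }
}
```

So where C# writes `C = A + B`, Java writes `C = A.plus(B)` — less natural to read, which is exactly the convenience operator overloading buys.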
Exception handling in C# is very similar to that of Java.
Whenever something goes critically wrong during execution of a program, the .NET runtime creates an Exception object detailing the error. In .NET, Exception is the base class for all the exception classes. There are two categories of exceptions that derive from the Exception base class: System.SystemException and System.ApplicationException. All types in the System namespace derive from System.SystemException, while user-defined exceptions should derive from System.ApplicationException to differentiate between runtime and application errors. Some common System exceptions include:
IndexOutOfRangeException
NullReferenceException
ArithmeticException
FormatException
As in Java, when we have code that is liable to cause an exception, we place that code within a try block. One or more catch blocks immediately after provide the error handling, and we can also use a finally block for any code that we want to execute whether an exception is thrown or not.
Note that when using multiple catch blocks, the exceptions caught must be placed in order of increasing generality as only the first catch block that matches the thrown exception will be executed. The C# compiler will enforce this while the Java compiler will not.
Also, C# doesn't require an argument for a catch block as Java does; in the absence of an argument, the catch block applies to any Exception class.
For example, while reading from a file you may encounter a FileNotFoundException or an IOException, and we would want to place the more specific FileNotFoundException handler first:
try { // Code to open and read a file } catch (FileNotFoundException fe) { // Handle file not found exception first } catch (IOException ioe) { // Now handle any other IO exceptions } catch { // This block will catch all other exceptions } finally { // Executed whether or not an exception occurs, often to release resources }
We can create our own exception classes by deriving from Exception. For example, the following code creates an InvalidDepartmentException class that we might throw if, say, the department given for a new employee record is invalid. The class constructor for our user-defined exception calls the base class constructor using the base keyword, sending an appropriate message:
public class InvalidDepartmentException : System.Exception { public InvalidDepartmentException(string Department) : base( "Invalid Department: " + Department){ } }
We could then throw an exception of this type with code such as this:
if (!(Department == "Sales" | Department == "Marketing")) { throw new InvalidDepartmentException(Department); }
Note that C# does not support checked exceptions. In Java these are declared using the throws keyword, to specify that a method may throw a particular type of exception which must be handled by the calling code. For more information on why C# doesn't support checked exceptions, read this interview with Anders Hejlsberg.
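For contrast, here is roughly what the InvalidDepartmentException example looks like in Java, where the checked exception must be declared with throws and handled (or re-declared) by every caller. This is a sketch; the class and method names are illustrative:

```java
// Java: a checked exception must be declared with 'throws' and handled by callers.
class InvalidDepartmentException extends Exception {
    InvalidDepartmentException(String department) {
        super("Invalid Department: " + department);
    }
}

public class Checked {
    // The compiler forces callers to catch this exception or declare it themselves.
    static void assign(String department) throws InvalidDepartmentException {
        if (!department.equals("Sales") && !department.equals("Marketing")) {
            throw new InvalidDepartmentException(department);
        }
    }

    public static void main(String[] args) {
        try {
            assign("Accounting");
        } catch (InvalidDepartmentException e) {
            System.out.println(e.getMessage()); // prints: Invalid Department: Accounting
        }
    }
}
```

In C#, by contrast, the try/catch around such a call is optional as far as the compiler is concerned.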
Indexers provide a way to access a class or struct in the same way as an array. For example, we may have a class that represents a single department in our company. The class could contain the names of all employees in the department, and indexers could allow us to access these names like this:
myDepartment[0] = "Fred"; myDepartment[1] = "Barney";
and so on. Indexers are enabled by defining a property with the following signature in the class definition:
public type this [int index]
We then provide get and set accessors as for a normal property, and it is these accessors that specify what internal member is referred to when the indexer is used.
In the following simple example, we create a class called Department that uses indexers to access the employees in that department, internally represented as an array of strings:
using System; public class Department { private string name; private const int MAX_EMPLOYEES = 10; private string [] employees = new string [MAX_EMPLOYEES]; public Department(string deptName) { name = deptName; } public string this [int index] { get { if (index >= 0 && index < MAX_EMPLOYEES) { return employees[index]; } else { throw new IndexOutOfRangeException(); //return "Error"; } } set { if (index >= 0 && index < MAX_EMPLOYEES) { employees[index] = value; } else { throw new IndexOutOfRangeException(); //return "Error"; } } } // Other methods and properties as usual }
We can then create an instance of this class and access it as shown below:
using System; public class SalesDept { public static void Main(string[] args) { Department sales = new Department("Sales"); sales[0] = "Nikki"; sales[1] = "Becky"; Console.WriteLine("The sales team is {0} and {1}", sales[0], sales[1]); } }
For more information on indexers, see Indexer.
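Java has no indexer syntax, so the conventional Java equivalent of the Department class above is a pair of accessor methods. The sketch below is illustrative (the getEmployee/setEmployee method names are my own):

```java
// Java equivalent of the C# indexer: explicit accessor methods instead of dept[i].
public class Department {
    private static final int MAX_EMPLOYEES = 10;
    private final String name;
    private final String[] employees = new String[MAX_EMPLOYEES];

    public Department(String deptName) { this.name = deptName; }

    // dept.getEmployee(0) plays the role of the C# expression dept[0]
    public String getEmployee(int index) {
        if (index < 0 || index >= MAX_EMPLOYEES) throw new IndexOutOfBoundsException();
        return employees[index];
    }

    // dept.setEmployee(0, "Nikki") plays the role of dept[0] = "Nikki"
    public void setEmployee(int index, String value) {
        if (index < 0 || index >= MAX_EMPLOYEES) throw new IndexOutOfBoundsException();
        employees[index] = value;
    }
}
```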
C# introduces a new mechanism for adding declarative information about types called attributes. Extra information about a type is placed inside declarative tags that precede the type definition. The examples below show how you can leverage .NET Framework attributes to decorate a class or method.
In the example below, the GetTime method is marked as an XML Web service by adding the WebMethod attribute.
using System; using System.Web.Services; public class Utilities : WebService { [WebMethod] public string GetTime() { return DateTime.Now.ToShortTimeString(); } }
By adding the WebMethod attribute, the .NET Framework will now automatically take care of the XML/SOAP interchange necessary to call this function. Calling this web service retrieves the following value:
<?xml version="1.0" encoding="utf-8" ?> <string xmlns="">7:26 PM</string>
In the example below, the Employee class is marked as serializable by adding the Serializable() attribute. While the Salary field is marked as public, it will not be serialized as it is marked with the NonSerialized() attribute.
using System; [Serializable()] public class Employee { public int ID; public string Name; [NonSerialized()] public int Salary; }
For information on creating custom attributes, see Creating Custom Attributes.
Languages such as C++, Pascal, and others support the concept of function pointers that permit us to choose which function we wish to call at run time.
Java does not provide any construct with the functionality of a function pointer, but C# does, through the System.Delegate class. A delegate instance encapsulates a method that is a callable entity.
For instance methods, the delegate consists of an instance of the containing class and a method on the instance. For static methods, a callable entity consists of a class and a static method on the class. Thus, a delegate may be used to invoke a function of any object, and delegates are object-oriented, type- safe, and secure.
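In Java, the role of a delegate is typically played by a small interface whose single method matches the desired signature. The sketch below uses Java 8 method references for brevity (the interface name is illustrative; this article predates Java 8, where you would pass anonymous class instances instead):

```java
// Java stand-in for "delegate long myDelegate(int i, int j)": a one-method interface.
interface LongBinaryOp {
    long apply(int i, int j);
}

public class DelegateDemo {
    static long add(int i, int j)      { return i + j; }
    static long multiply(int i, int j) { return (long) i * j; }

    public static void main(String[] args) {
        // "Instantiate the delegate" by pointing the reference at a method.
        LongBinaryOp operation = DelegateDemo::add;
        System.out.println(operation.apply(10, 20));     // 30

        // Re-point it at another method with the same signature.
        operation = DelegateDemo::multiply;
        System.out.println(operation.apply(1639, 1525)); // 2499475
    }
}
```

As with C# delegates, reassignment is only possible because both methods share the interface's signature.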
There are three steps in defining and using delegates: declaration, instantiation, and invocation.
We declare a delegate with the following syntax:
delegate void myDelegate();
This delegate can then be used to reference any function that returns void and does not take any arguments.
Similarly, to create a delegate for any function that takes a string parameter and returns a long, we would use the following syntax:
delegate long myDelegate(string mystring);
We could then assign this delegate to any method with this signature, like so:
myDelegate operation = new myDelegate(methodName);
Delegate objects are immutable, that is, the signature they match cannot be changed once set. However, we can point to another method as long as both have the same signature. For instance:
delegate long myDelegate(int a, int b); myDelegate operation = new myDelegate(Add); operation = new myDelegate(Multiply);
Here, we reassign operation to a new delegate object so that operation then invokes the Multiply method. We can only do this if both Add() and Multiply() have the same signature.
Invoking a delegate is fairly straightforward, simply substituting the name of the delegate variable for the method name:
delegate long myDelegate(int i, int j); myDelegate operation = new myDelegate(Add); long lresult = operation(10, 20);
This invokes the Add method with values 10 and 20, and returns a long result that is assigned to the variable lresult.
Let's create a quick program to illustrate the creation, instantiation, and invocation of a delegate:
using System; public class DelegateClass { delegate long myDelegate (int i, int j); public static void Main(string[] args) { myDelegate operation = new myDelegate(MathClass.Add); Console.WriteLine("Call to Add method through delegate"); long l = operation(10, 20); Console.WriteLine("Sum of 10 and 20 is " + l); Console.WriteLine("Call to Multiply method through delegate"); operation = new myDelegate(MathClass.Multiply); l = operation(1639, 1525); Console.WriteLine("1639 multiplied by 1525 equals " + l); } } public class MathClass { public static long Add (int i, int j) { return (i+j); } public static long Multiply (int i, int j) { return (i*j); } }
The output we will get is this:
Call to Add method through delegate Sum of 10 and 20 is 30 Call to Multiply method through delegate 1639 multiplied by 1525 equals 2499475
As mentioned earlier, a delegate instance must contain an object reference. We got round this in the example above by declaring our methods as static, which means there's no need to specify an object reference ourselves. If a delegate refers to an instance method however, the object reference must be given like so:
MathClass obj = new MathClass(); myDelegate operation = new myDelegate(obj.Power);
where Power is an instance method of MathClass. So, if MathClass's methods were not declared as static, we would invoke them through a delegate like so:
using System; public class DelegateClass { delegate long myDelegate(int i, int j); public static void Main(string[] args) { MathClass mathObj = new MathClass(); myDelegate operation = new myDelegate(mathObj.Add); Console.WriteLine("Call to Add method through delegate"); long l = operation(10, 20); Console.WriteLine("Sum of 10 and 20 is " + l); Console.WriteLine("Call to Multiply method through delegate"); operation = new myDelegate(mathObj.Multiply); l = operation(1639, 1525); Console.WriteLine("1639 multiplied by 1525 equals " + l); } }
If you run this program, you'll get the same output as previously, when the methods were declared as static.
The .NET Framework also leverages delegates extensively for event handling tasks like a button click event in a Windows or Web application. While event handling in Java is typically done by implementing custom listener classes, C# developers can take advantage of delegates for event handling. An event is declared like a field with a delegate type, except that the keyword event precedes the event declaration. Events are typically declared public, but any accessibility modifier is allowed. The code below shows the declaration of a delegate and event.
public delegate void CustomEventHandler(object sender, EventArgs e); public event CustomEventHandler CustomEvent;
Event delegates are multicast, which means that they can hold references to more than one event handling method. A delegate acts as an event dispatcher for the class that raises the event by maintaining a list of registered event handlers for the event. The example below shows how you can subscribe multiple functions to an event. The class EventClass contains the delegate, the event, and a method to invoke the event. Note that invoking an event can only be done from within the class that declared the event. The class TestClass can then subscribe to the event using the += operator and unsubscribe using the -= operator. When the InvokeEvent() method is called, it fires the event and any functions that have subscribed to the event will fire synchronously as shown in the code below.
using System; class TestClass { static void Main(string[] args) { EventClass myEventClass = new EventClass(); // Associate the handler with the events myEventClass.CustomEvent += new EventClass.CustomEventHandler(CustomEvent1); myEventClass.CustomEvent += new EventClass.CustomEventHandler(CustomEvent2); myEventClass.InvokeEvent(); myEventClass.CustomEvent -= new EventClass.CustomEventHandler(CustomEvent2); myEventClass.InvokeEvent(); } private static void CustomEvent1(object sender, EventArgs e) { Console.WriteLine("Fire Event 1"); } private static void CustomEvent2(object sender, EventArgs e) { Console.WriteLine("Fire Event 2"); } } public class EventClass { public delegate void CustomEventHandler(object sender, EventArgs e); //Declare the event using the delegate datatype public event CustomEventHandler CustomEvent; public void InvokeEvent() { CustomEvent(this, EventArgs.Empty); } }
The output of this program is:
Fire Event 1 Fire Event 2 Fire Event 1
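For comparison, the listener-based Java approach mentioned above can be sketched as follows (interface and class names are illustrative; Java 8 lambdas are used for brevity, where the original era's code would use anonymous listener classes):

```java
import java.util.ArrayList;
import java.util.List;

// Java-style event handling: a custom listener interface plus a registration list,
// playing the role of C#'s multicast event delegate.
interface CustomEventListener {
    void onCustomEvent(Object source);
}

class EventSource {
    private final List<CustomEventListener> listeners = new ArrayList<>();

    public void addListener(CustomEventListener l)    { listeners.add(l); }    // like +=
    public void removeListener(CustomEventListener l) { listeners.remove(l); } // like -=

    public void invokeEvent() {
        for (CustomEventListener l : listeners) {
            l.onCustomEvent(this); // fire synchronously, in registration order
        }
    }
}

public class ListenerDemo {
    public static void main(String[] args) {
        EventSource src = new EventSource();
        CustomEventListener first  = s -> System.out.println("Fire Event 1");
        CustomEventListener second = s -> System.out.println("Fire Event 2");
        src.addListener(first);
        src.addListener(second);
        src.invokeEvent();          // Fire Event 1, Fire Event 2
        src.removeListener(second);
        src.invokeEvent();          // Fire Event 1
    }
}
```

The C# event/delegate mechanism builds this subscription bookkeeping into the language, which is why the C# version needs no explicit listener list.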
In C and C++, many objects require the programmer to allocate resources for them once declared, before the objects may be safely used, and to release those resources back to the free memory pool once the object is finished with. In C#, as in Java, this burden is lifted: the garbage collector automatically reclaims the memory used by objects that are no longer reachable. For more information on Garbage Collection, see Garbage Collection: Automatic Memory Management in the Microsoft .NET Framework, and Garbage Collection—Part 2: Automatic Memory Management in the Microsoft .NET Framework.
A particularly interesting feature of C# is its support for non-type-safe code. Normally, the CLR takes on the responsibility of overseeing the behavior of IL (Intermediate Language) code, and prevents any questionable operations. However, there are times when we wish to directly access low-level functionality such as Win32 API calls, and we are permitted to do this, as long as we take responsibility for ensuring such code operates correctly. Such code must be placed inside unsafe blocks in our source code.
C# code that makes low-level API calls, uses pointer arithmetic, or carries out some other unsavory operation, has to be placed inside blocks marked with the unsafe keyword. The unsafe modifier can be applied to a method, to a statement block, or to an entire type.
The following example demonstrates the use of unsafe in all three of the above situations:
using System; class UnsafeClass { unsafe static void PointyMethod() { int i=10; int *p = &i; Console.WriteLine("*p = " + *p); string address = "Pointer p = " + ((int)p).ToString("X"); Console.WriteLine(address); } static void StillPointy() { int i=10; unsafe { int *p = &i; Console.WriteLine("*p = " + *p); string address = "Pointer p = " + ((int)p).ToString("X"); Console.WriteLine(address); } } static void Main() { PointyMethod(); StillPointy(); } }
In this code, the entire PointyMethod() method is marked unsafe because the method declares and uses pointers. The StillPointy() method instead marks only a block of code as unsafe, as this block once again uses pointers.
For more information about unsafe code, see Unsafe at the Limit.
In safe code, the garbage collector is quite free to move an object during its lifetime in its mission to organize and condense free resources. However, if our code uses pointers, this behavior could easily cause unexpected results, so we can instruct the garbage collector not to move certain objects using the fixed keyword.
The following code shows the fixed keyword being used to ensure that an array is not moved by the system during the execution of a block of code in the PointyMethod() method. Note that fixed is only used within unsafe code:
public class FixedClass { public static void PointyMethod(char[] array) { unsafe { fixed (char *p = array) { for (int i=0; i<array.Length; i++) { Console.Write(*(p+i)); } } } } static void Main () { char[] array = { 'H', 'e', 'l', 'l', 'o' }; PointyMethod(array); } }
Though Microsoft and other vendors have introduced many languages for the .NET platform, C# is a language that closely resembles Java and is very well suited to developers wishing to migrate from J2EE to the .NET platform.
This document has compared and contrasted the two languages. In many ways, C# has the power of C++, the elegance of Java, and the ease of development of Visual Basic, and I hope that this document has demonstrated this.
To learn how to get started creating your first C# application, visit the Java Resource Center Getting Started Page. | http://msdn.microsoft.com/en-us/vstudio/aa700844.aspx | crawl-002 | refinedweb | 10,639 | 51.38 |
Introduction
The objective of this post is to explain how to read and display an image with Python and OpenCV. This will be a very simple introductory code.
You can check here how to install OpenCV on Windows. The easiest way is installing it from the pre-built binaries, as indicated here. Don’t forget to install Numpy, which is also needed to support the OpenCV functionality. The easiest way is doing it via pip, with the following command:
pip install numpy
The code
First of all, we need to import the cv2 module, which we will use to access the image processing functionality.
Then, to read an image, we simply call the imread function of the cv2 module. This will return an image as a numpy ndarray. We can confirm this by calling the type function and passing as input the object returned by the imread function.
Note that if the file is not in Python’s working directory, we need to specify the full path, as indicated bellow. In my case, I was reading an image from my desktop.
image = cv2.imread('C:/Users/N/Desktop/Test.jpg') print type(image)
Then, to display the image we read with the previous function, we call the imshow function of the cv2 module. This function will display the image in a window and it receives as input the name of the window and the image we previously got with the imread function [1].
cv2.imshow('Test image',image)
Then, we will call the waitKey function, which will wait for a keyboard event and receives as input a delay in milliseconds [2]. If we pass a value less than or equal to 0, it will wait indefinitely for a key event [2]. So, the execution of our program will block here until we press a key.
After the user presses a key, we will assume the window with the image should be destroyed. To do so, we can call the destroyAllWindows function, which will destroy all the windows previously created [2]. Note that if we want to destroy a specific window, we can call the destroyWindow function, which receives as input the name of the window we want to destroy [2]. This second function is useful if we create multiple windows.
cv2.waitKey(0) cv2.destroyAllWindows()
Check the full code bellow.
import cv2 image = cv2.imread('C:/Users/N/Desktop/Test.jpg') print type(image) cv2.imshow('Test image',image) cv2.waitKey(0) cv2.destroyAllWindows()
Running the code
After finishing the code, just run it on IDLE. A window with the image should pop, as indicated in figure 1. Also, in IDLE’s console, the type printed for our image should be of type ndarray, as expected.
To finish the program, just press any key on your keyboard and the window should be destroyed.
Figure 1 – Output of the image reading program.
Related content
- Difference between cv and cv2 modules
- OpenCV getting started with images
- OpenCV Windows setup
- OpenCV documentation index
References
[1]
[2]
Technical details
- Python version: 2.7.8
- OpenCV version: 3.2.0 | https://techtutorialsx.com/2017/04/30/python-opencv-reading-and-displaying-an-image/ | CC-MAIN-2017-26 | refinedweb | 533 | 64.2 |
Java security – keys
Ysera
Assistant Engineer
UID
634
Fans
0
Follows
0
44
Send
Reads:
38089
Replies:
0
Java security – keys
Posted time:May 5, 2017 14:34 PM
Concept
The key is an indispensable part of any encryption algorithm and is vital to the security system. As its name suggests, the key is private and is used to open the door to security. There are two key types: symmetric keys and asymmetric keys. An asymmetric key pair consists of a public key and a private key.
There is another concept associated with the key: the certificate. The certificate is primarily used to identify the key and the public key is usually transferred in the certificate.
In the Java security system, keys are implemented through the JCE algorithm package. The engine that operates on keys consists of two parts: the key generator and the key factory. The key generator creates a key, and the key factory packages and presents the key as output. In program terms, creating a key involves two steps: 1. Produce the key with the key generator; 2. Output the key from the key factory as a key specification or a set of bytes.
Java implementation
Java encapsulates an interface for the key - Key. The asymmetric key contains PublicKey and PrivateKey, both of which implement this interface. From the output result of the previous "secure provider framework", we can see that different secure providers provide a lot of key generation algorithms, the typical ones being Sun's DSA and RSA and JCE's Diffie-Hellman algorithm.
Key generation and expression
Java provides two generator classes for key generation: KeyPairGenerator and KeyGenerator. The former is used to generate asymmetric keys, and the latter is used to generate symmetric keys. The corresponding key representations are: the KeyFactory class represents asymmetric keys, and the SecretKeyFactory class represents symmetric keys.
Let's look at a DSA example:
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.spec.DSAPrivateKeySpec;
import java.security.spec.InvalidKeySpecException;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.DESKeySpec;
public class KeyTest {
public static void main(String[] args) {
try {
generateKeyPair();
generateKey();
} catch (InvalidKeySpecException e) {
e.printStackTrace();
} catch (NoSuchAlgorithmException e) {
e.printStackTrace();
}
}
public static void generateKeyPair() throws NoSuchAlgorithmException, InvalidKeySpecException {
KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
kpg.initialize(512);
KeyPair kp = kpg.generateKeyPair();
System.out.println(kpg.getProvider());
System.out.println(kpg.getAlgorithm());
KeyFactory kf = KeyFactory.getInstance("DSA");
DSAPrivateKeySpec dsaPKS = kf.getKeySpec(kp.getPrivate(), DSAPrivateKeySpec.class);
System.out.println("\tDSA param G:" + dsaPKS.getG());
System.out.println("\tDSA param P:" + dsaPKS.getP());
System.out.println("\tDSA param Q:" + dsaPKS.getQ());
System.out.println("\tDSA param X:" + dsaPKS.getX());
}
public static void generateKey() throws NoSuchAlgorithmException, InvalidKeySpecException {
KeyGenerator kg = KeyGenerator.getInstance("DES");
SecretKey key = kg.generateKey();
System.out.println(kg.getProvider());
System.out.println(kg.getAlgorithm());
SecretKeyFactory skf = SecretKeyFactory.getInstance("DES");
DESKeySpec desKS = (DESKeySpec) skf.getKeySpec(key, DESKeySpec.class);
System.out.println("\tDES key bytes size:" + desKS.getKey().length);
}
}
The code architecture design class diagram for key generation is as follows:
KeyGenerator is similar to KeyPairGenerator (KPG); the difference is that KPG generates a KeyPair, while KeyGenerator (KG) generates a SecretKey.
Key management
About certificates
It is difficult to consider where to cover certificates, so I will not discuss it separately. Taking into account that the certificate can verify the legitimacy of the key, this seems like an appropriate place.
The public key must be transferred to the corresponding requestor in an asymmetric key scenario. How can we ensure that this public key is the one I provided to you, instead of a forged one substituted by someone else? The transmission would need to be signed, but verifying a signature requires yet another trusted key, and so the cycle repeats. This is where the certificate is introduced: the certificate can ensure that the content is consistent with that of the source, that is to say, the certificate can guarantee that the content sent to the requestor indeed belongs to the content owner.
Not everyone can issue a certificate. The certificate must be issued by a fair entity (CA, certificate authority) and its legitimacy is also verified. The certificate contains three aspects of content:
1. The entity name, that is, the certificate holder.
2. The public key associated with the subject.
3. The digital signature used to verify the certificate information. The certificate is signed by the certificate issuer.
Java has a corresponding Certificate class for certificate-related tasks. Since the certificate is not the focus of our discussion here, and Java itself does not have complete support for certificates, let's digress from certificate content back to the key transmission topic.
KeyStore
The KeyStore class of Java is responsible for key management. KeyStore has a setKeyEntry () method. The general procedure is that KeyStore sets the key as a key entry, and then saves the key as a .keystore file using the store () method. The consumer gets the .keystore file, reads the key entry using the load () method, and then uses it.
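A minimal sketch of that store/load round trip (the alias, password, and temp-file location are illustrative; JCEKS is used so that a symmetric secret key can be stored):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.Key;
import java.security.KeyStore;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyStoreRoundTrip {
    public static void main(String[] args) throws Exception {
        char[] password = "123456".toCharArray();
        File file = File.createTempFile("demo", ".keystore");

        // Producer side: create a JCEKS store, add a secret key entry, save it.
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(null, password); // a null stream initializes an empty store
        SecretKey key = KeyGenerator.getInstance("DES").generateKey();
        ks.setKeyEntry("myKeyEntry", key, password, null);
        try (FileOutputStream fos = new FileOutputStream(file)) {
            ks.store(fos, password);
        }

        // Consumer side: load the .keystore file and read the entry back.
        KeyStore loaded = KeyStore.getInstance("JCEKS");
        try (FileInputStream fis = new FileInputStream(file)) {
            loaded.load(fis, password);
        }
        Key recovered = loaded.getKey("myKeyEntry", password);
        System.out.println(recovered.getAlgorithm()); // DES
    }
}
```

Note the `ks.load(null, password)` call: passing a null stream is what initializes a brand-new, empty KeyStore before the first store().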
For a symmetric secret key, write the key entry as follows:
public static void secretKeyStore() throws KeyStoreException, NoSuchAlgorithmException,
CertificateException, IOException {
char[] password = "123456".toCharArray();
String fileName = System.getProperty("user.home") + File.separator + ".keystore";
FileInputStream fis = new FileInputStream(fileName);
KeyStore ks = KeyStore.getInstance("jceks");
ks.load(fis, password);
KeyGenerator kg = KeyGenerator.getInstance("DES");
SecretKey key = kg.generateKey();
ks.setKeyEntry("myKeyEntry", key, password, null);
FileOutputStream fos = new FileOutputStream(fileName);
ks.store(fos, password);
System.out.println("store key in " + fileName);
}
Some concepts involved here:
• KeyStore: The place to manage and store the key and certificate. Java key management is built based on the KeyStore.
• Key entry: The KeyStore stores key entries. A key entry either saves an asymmetric key pair or a secret key. If a key pair is saved, a certificate chain may also be saved. The first certificate of the certificate chain contains the public key.
• Alias: Each key can have an alias and can be understood as the name of the key entry.
• Distinguished name: The distinguished name of the entity in the KeyStore is its complete subset of X.500, such as a DN is CN = Yu Jia, OU = ALI, O = ALIBABA, L = HZ, ST = ZJ, C = CN.
• Certificate entry: It contains only one public key certificate. The certificate is stored, instead of the certificate chain.
• JKS, JCEKS, PKCS12: The KeyStore algorithm. The default algorithm in Java is JKS. It can only be used to save the private key. If you want to save the secret key of a symmetric key, you need to use JCEKS, which is the KeyStore ks = KeyStore.getInstance ("jceks"); mentioned in the code above. You can change the default algorithm by modifying keystore.type = JCEKS in the java.security file.
Keytool
Something still seems missing, because the above code cannot be executed if put into the main function. A question also comes up: why is it loaded first for creating a KeyStore?
Look at the source code in the store () method of KeyStore:
public final void store(OutputStream stream, char[] password)
throws KeyStoreException, IOException, NoSuchAlgorithmException,
CertificateException
{
if (!initialized) {
throw new KeyStoreException("Uninitialized keystore");
}
keyStoreSpi.engineStore(stream, password);
}
An uninitialized KeyStore will throw a KeyStoreException. The initialization is done in the load() method. That raises an odd question: how is the very first KeyStore in the system directory created in the first place?
This introduces the keytool which is a management tool provided by JRE to facilitate the management of KeyStores. The keytool is a command line interface and used to manage the KeyStore. Details about various specific parameters can be found through man keytool or keytool -help.
Here I list how my program initializes a KeyStore:
1. I first generate a key entry with an alias of changedi. It adopts the RSA asymmetric algorithm:
zunyuanjys-MacBook-Air:~ zunyuan.jy$ keytool -genkey -alias changedi -keyalg RSA
Enter the KeyStore password:
Enter the new password again:
What is your last name and first name?
[Unknown]: Yu Jia
What is the name of your organization unit?
[Unknown]: ALI
What is the name of your organization?
[Unknown]: ALIBABA
What is the name of the city or region you are in?
[Unknown]: HZ
What is the name of the province/municipality/autonomous region you are in?
[Unknown]: ZJ
What is the double-letter code of the country/region of the unit?
[Unknown]: CN
CN=Yu Jia, OU=ALI, O=ALIBABA, L=HZ, ST=ZJ, C=CN. Is it correct?
[No]: Y
Enter the <changedi> key password
(If it matches the KeyStore password, press Enter):
Enter the new password again:
2. After entering the DN following the prompts, the KeyStore will be ready. You can check it.
zunyuanjys-MacBook-Air:~ zunyuan.jy$ keytool -list
Enter the KeyStore password:
KeyStore type: JKS
KeyStore provider: SUN
3. We can see that the store is still of the JKS type and needs to be changed to JCEKS. To do this, perform the following:
zunyuanjys-MacBook-Air:~ zunyuan.jy$ keytool -keypasswd -alias changedi -storetype jceks
Enter the KeyStore password:
Enter the <changedi> key password
New <changedi> key password:
Re-enter the new <changedi> key password:
4. Specify the storetype when listing. What the previous command visibly modified was the password, but the core objective was to change the KeyStore type.
zunyuanjys-MacBook-Air:~ zunyuan.jy$ keytool -list -storetype jceks
Enter the KeyStore password:
KeyStore type: JCEKS
KeyStore provider: SunJCE
5. Run the program and write a secret key for the symmetric key into it as a key entry of this KeyStore. Then list it.
zunyuanjys-MacBook-Air:~ zunyuan.jy$ keytool -list -storetype jceks
Enter the KeyStore password:
KeyStore type: JCEKS
KeyStore provider: SunJCE
Your KeyStore contains a total of 2 entries.
changedi, 2016-7-7, PrivateKeyEntry,
Certificate fingerprint (SHA1): 76:C8:CE:EA:4C:29:6D:0E:FF:8C:02:BE:F4:F4:55:97:63:1F:C8:26
mykeyentry, 2016-7-7, SecretKeyEntry,
In fact, in the above example, you can specify the storetype as JCEKS during the creation of the first key entry. Here I just show how to switch the KeyStore type. In addition, when the RSA private key entry has no certificate specified, a self-signed certificate will also be generated.
Going back to the code, let's look at the details of setKeyEntry:
public final void setKeyEntry(String alias, Key key, char[] password,
Certificate[] chain)
throws KeyStoreException
{
if (!initialized) {
throw new KeyStoreException("Uninitialized keystore");
}
if ((key instanceof PrivateKey) &&
(chain == null || chain.length == 0)) {
throw new IllegalArgumentException("Private key must be "
+ "accompanied by certificate "
+ "chain");
}
keyStoreSpi.engineSetKeyEntry(alias, key, password, chain);
}
It can be seen that a certificate chain is required for the production of asymmetric keys. Otherwise, an exception will be thrown. Taking this situation into account, we generally resort to keytool for non-enterprise-level security scenarios.
Likes
0
Latest likes:
Replies
Reply
« Back to list
General
You need to login to reply the post, Please
or
Latest likes | https://www.alibabacloud.com/forum/read-911?desc=1&uid=634 | CC-MAIN-2020-40 | refinedweb | 1,853 | 50.33 |
Description
Installed version 1.6.1 from scratch. But when I try to visit any page in a web-browser, I get "CGI application did not return complete set of HTTP headers" error. If I add the "standard" print "Content-Type: text/html\n\n" to my moin.cgi file then I can see the pages.
Then on the top of every page I see that log goes to stdout: 182152 INFO logging initialized Status: 200 OK Content-Type: text/html; charset=utf-8 Vary: Cookie,User-Agent,Accept-Language.
If I roll back to 1.5.8, then everything's fine.
Steps to reproduce
- install 1.6.1 with IIS 6.0
Example
Component selection
- logging / cgi
Details
Workaround
Go to moin.cgi and change it like this:
import logging from MoinMoin.server.server_cgi import CgiConfig, run class Config(CgiConfig): # This is important, IIS does not like stuff on stderr: loglevel_stderr = None # But you still want to have moin's log output somewhere, # thus we write it to an own log file. Make sure that this # log file can be created/updated by the moin process! loglevel_file = logging.INFO # Server name - used to create .log and .prof files name = 'moin' logPath = '/path/to/logdir/%s.log' % name
Discussion
Not clear wheter this is a IIS bug or a moin bug. I could not find official cgi specs telling something about stderr.
Plan
- Priority:
- Assigned to:
- Status: see workaround above. in 1.7 there will be a very configurable logging system. | http://www.moinmo.in/MoinMoinBugs/LogGoesToStdoutOnIIS | crawl-003 | refinedweb | 251 | 77.33 |
Steps to install DNX (.NET Execution environment)
The original instructions are here, but the steps below are organized better and fill some gaps. Except for the .NET 4.5.2, PowerShell 4.0 and VC++ redistributable, you don't need admin access for DNX itself as it deploys to your user folder.
Upgrade to .NET 4.5.2 (or higher) if you don't have it - Note needs about 3GB on C drive so you may have to make room first. This is a pre-requisite for PowerShell.
Reboot!
Upgrade PowerShell to latest version (4.0) if you don't have it already
(Select x64 version)
Reboot!
Install the Visual C++ 2013 redistributable package from here (in case you don't have it already). Install both the x86 and x64 packages. This is pre-requisite for Kestrel (the development web server).
Run PowerShell
Check powershell version:
$PSVersionTable.PSVersion (should be 4.0)
Exit PowerShell
Issue the following command:
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "&{$Branch='dev';iex ((new-object net.webclient).DownloadString(''))}"
This will install DNVM (.NET Version Manager) to your user folder.
Close command prompt window.
Open new Command Prompt
dnvm
You should see help information for the DNVM command
If you use a Proxy Server within your environment (at work for example), pay attention to the next 2 lines, otherwise skip over it.
setx http_proxy
Close prompt and open a new one - check that http_proxy environment variable has been set correctly.
dnvm install latest -Unstable -Persistent (this will pull the latest runtime bits and install to your user folder)
dnvm list (this will show you all the installed runtimes and a * will be next to the current default)
Close prompt and open a new one
Type the following command:
dnx
You should see help information for DNX
Occasionally you can run the following command to get the latest DNX and DNVM bits:
dnvm upgrade
dnvm update-self
Finally we need to tell the package management system to get packages from the DEV package server.
Update c:\users\<user>\AppData\Roaming\Nuget\NuGet.config (this determines which source DNU RESTORE will use) with these contents. You can merge the existing contents.
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<packageSources>
<add key="AspNetVNext" value="" />
<add key="nuget.org" value="" />
</packageSources>
<disabledPackageSources />
<activePackageSource>
<add key="AspNetVNext" value="" />
</activePackageSource>
</configuration>
Another option is use this DNU command instead of 'dnu restore':
dnu restore -s
Note DNU is the .NET Utility which includes functionality to restore packages, build packages, compile code etc.
To test the installation:
Create a folder c:\temp\ConsoleApp
In this folder, create a file Program.cs with these contents:
using System;
public class Program
{
public static void Main()
{
Console.WriteLine("Hello World");
}
}
Also create a file project.json with these contents:
{
"dependencies": {
},
"commands": {
"ConsoleApp": "ConsoleApp"
},
"frameworks": {
"dnx451": { },
"dnxcore50": {
"dependencies": {
"System.Console": "4.0.0-beta-*"
}
}
}
}
Now using a Command Prompt, go to the folder containing the files and execute the following command:
dnu restore (this will pull all the required packages into your user folder)
dnx ConsoleApp
It should print Hello World!
You can also try the simple CookieSample web app from here.
Note for web apps you have to use the following command to run the development web server (kestrel):
dnx kestrel (assuming you are currently in the folder containing the project.json file)
Optional Step
If you would like to have a convenient way of editing your source files without installing Visual Studio, then check out OmniSharp. It is a free plug-in to some popular source editors that supports ASP.NET 5 development. The Atom editor works pretty well on Windows, is lightweight and deploys to the user folder (no admin permissions required). Also install the OmniSharp plugs for Atom.
After you install Atom, close the editor and open a new command prompt and type this command if you use a Proxy Server in your environment:
apm config set https-proxy
Then issue these commands to install the Atom plug-ins:
apm install autocomplete-plus
apm install linter
apm install omnisharp-atom
Then run Atom and follow the instructions here to use the intellisense etc.
so perfect ! but my OS is win 8.1 ,you give the link for powershell only support win 8,so don't fix dvnm!!!
Do you any idea????finally,thakns for your help!
so perfect ! but my OS is win 8.1 ,you give the link for powershell 4.0 version only support win 8,so don't fix dvnm!!!
Do you have any idea????finally,thanks for your help!
PowerShell 4.0 is already included in Win8.1 so you should be able to skip this step.
yeah,I solve the problem !! you are right , My idea is fault,thanks !
Epic instructions!!!
I think you have a typo error at end of article where the command is issued to run the code: `dnx . ConsoleApp`
Removing the dot and the code ran as expected.
Great work. Thank you.
The '.' is required – it is the path of the application and '.' means current folder.
Has anyone got this to work? Docker run –rm -it windowsservercore cmd
follow the instructions and when I get to typing dnx, it is telling me it cant load dnx win32 dll.
I am running Windows 10 and VS 2015 and ended up with Missing References in a new ASP.NET 5 MVC project.
I am running Windows 7 and VS 2015. I had to remove the "required" period to get the Hello World application to run.
dnx ConsoleApp
Thanks – that was due to a recent RC1 update change. I've fixed the blog post. | https://blogs.msdn.microsoft.com/sujitdmello/2015/04/23/step-by-step-installation-instructions-for-getting-dnx-on-your-windows-machine/ | CC-MAIN-2019-30 | refinedweb | 936 | 66.23 |
Copy formatted org-mode text from Emacs to other applications
Posted June 16, 2016 at 11:46 AM | categories: rtf, emacs | tags: | View Comments
I do a lot of writing in org-mode and I thought it would be great if I could copy text from an org-file and paste it with formatting into other applications, e.g. Word, Gmail, etc…. Curiosity got the better of me and I wondered how this is done in other applications. It works by creating a Rich Text Format version of what you want to copy and then putting that on the clipboard. It isn't quite enough to just copy it, it needs to go in the clipboard as an RTF datatype. On Mac OSX I used pbcopy to make that happen.
One simple strategy to do this from org-mode is to generate HTML by export, and then convert it to RTF with a utility, e.g. textutil. For example like this.
(defun formatted-copy () "Export region to HTML, and copy it to the clipboard." (interactive) (save-window-excursion (let* ((buf (org-export-to-buffer 'html "*Formatted Copy*" nil nil t t)) (html (with-current-buffer buf (buffer-string)))) (with-current-buffer buf (shell-command-on-region (point-min) (point-max) "textutil -stdin -format html -convert rtf -stdout | pbcopy")) (kill-buffer buf)))) (global-set-key (kbd "H-w") 'formatted-copy)
This works well for everything but equations and images. Citations leave a bit to be desired, but improving this is still a challenge.
Let us try this on some text. Some bold, italic, underline,
struck and
verbatim text to copy. Here are some example Formulas: H2O ionizes to form H+. We simply must have an equation: \(e^{i\pi} + 1 = 0\) 1. We should also have a citation kitchin-2015-examp and multiple citations kitchin-2016-autom-data,kitchin-2015-data-surfac-scien 2.
A code block:
import pycse.orgmode as org import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 60, 500) plt.figure(figsize=(4, 2)) plt.plot(np.exp(-0.1 * x) * np.cos(x), np.exp(-0.1 * x) * np.sin(x)) org.figure(plt.savefig('spiral.png'), caption='A spiral.', attributes=[['org', ':width 100']]) print('') org.table([['H1', 'H2'], None, [1, 2], [2, 4]], caption='A simple table') print('') org.result(6 * 7)
Figure 1: A spiral.
42
In summary, this simple approach to generating RTF from exported HTML works really well for the simplest markups. To improve on getting figures in, getting cross-references, captions, proper references, etc… will require a more sophisticated export approach, and probably one that exports RTF directly. That is a big challenge for another day!
Bibliography
- [kitchin-2015-examp] Kitchin, Examples of Effective Data Sharing in Scientific Publishing, ACS Catalysis, 5(6), 3894-3899 (2015).
- [kitchin-2016-autom-data] "Kitchin, Van Gulick & Zilinski, Automating Data Sharing Through Authoring Tools, "International Journal on Digital Libraries", , 1-6 (2016).
- [kitchin-2015-data-surfac-scien] "John Kitchin", Data Sharing in Surface Science, "Surface Science ", N/A, in press (2015).
Footnotes:
Copyright (C) 2016 by John Kitchin. See the License for information about copying.
Org-mode version = 8.3.4 | http://kitchingroup.cheme.cmu.edu/blog/2016/06/16/Copy-formatted-org-mode-text-from-Emacs-to-other-applications/ | CC-MAIN-2017-22 | refinedweb | 530 | 58.99 |
Hello people,
I have a list of SHP files that I want to rename. First, I want to replace the "." from the shp to "_", and then rename the file. As you can see in the image below:
This the code that I am using:
import arcpy, os
arcpy.env.workspace=r'H:\WWF\0_GeoDatabase_GIS\6_Especies\Conejo\Seguimiento\Cuadriculas\WWF\Andujar\2012\RAMON_1'
for filename in arcpy.ListFeatureClasses():
a=filename.replace('.', '_')
a="shp_"+a
arcpy.Rename_management(filename, a)
print "ok"
The problem is that when I am trying to open the SHP file it does not work. I am getting the following error message:
Does anyone knows how to fix the problem...
Thanks for your help
The code seems to work... it does not give any error message in PyScripter...But when I am trying to add the shp to ArcMAP it gives the following error:
Your trying to hard
import arcpy
arcpy.env.worcspace = "Your workspace"
for file in arcpy.ListFeatureClasses():
a = "shp_" + file.replace('.','_')
arcpy.Rename_management(file,a) | https://community.esri.com/t5/python-questions/rename-a-list-of-shp-files-in-python/m-p/549756 | CC-MAIN-2021-21 | refinedweb | 172 | 60.41 |
Hi there,
I am following tutorials to learn about OSM and how to make some maps.
Currently I am using this one, but in the instructions it uses shapefiles as a datasource just in a normal file directory. I don’t want to do that, I want the file source to be from my database in PostGIS, where I have stored my OSM data. What do I need to do to make it do that? I tried:
lyr = Layer('Geometry from PostGIS')
lyr.datasource = PostGIS(host='localhost',user='postgres',password='',dbname='your_postgis_database',table='your_table')
which I copied off one of the wiki pages. Obviously I amended details as appropriate, but it just said layer not recognised. I wondered if there way something I had to do or tell it to do to get into the database?
In the literature I have read, it says that using PostGIS is one of the most common ways of doing what I want to do, but I cannot seem to find any code to use!
Thanks in Advance,
Tracey
asked
20 Feb '12, 13:09
lgxtlm
11●1●1●3
accept rate:
0%
Do you have solve your problem?
An important choice you're also faced with initially is... Do you want to get the default OpenStreetMap stylesheets working with Mapnik? We have a lot of documentation about how to do this. Unfortunately it's a little more fiddly than some of the basic "Getting started" Mapnik tutorials, and you are more restricted as to which Mapnik version you work with and what your data source types are.
To get the OSM default stylesheet running you must work with PostGIS (as well as some shapefiles for zoomed out world boundaries and coastline), and you must populate the database using osm2pgsql which is also governed by a matching style configuration file. Hopefully a good guide to all this is the Mapnik wiki page. Having done the database loading, there's some python scripts provided: generate_image.py and generate_tiles.py Playing around with these, you'll get some control within python, but it's not quite the same as the from-scratch tutorial you link to.
generate_image.py
generate_tiles.py
...but maybe there's a better half-way-house tutorial involving osm2pgsql and building style rules in python. (Anyone know?)
answered
27 Feb '12, 15:59
Harry Wood
9.3k●25●87●128
accept rate:
14%
edited
27 Feb '12, 16:00
Please, give us more of your code. Here is one of my script that was working with mapnik two years ago:
def addLayer(m, name, dbname, table, symbolizer):
# style creation
s = mapnik.Style()
r = mapnik.Rule()
r.symbols.append(symbolizer)
s.rules.append(r)
m.append_style(name,s)
# layer creation
layer = mapnik.Layer(name, "+proj=latlong +datum=WGS84")
layer.datasource = mapnik.PostGIS(host='localhost',port='5434',user='mapnik',password='mapnik',dbname=dbname,table=table)
layer.styles.append(name)
m.layers.append(layer)
return layer
# usage sample
m = mapnik.Map(1000,1500)
m.background = mapnik.Color('white')
projection = "+proj=latlong +datum=WGS84"
addLayer(m, name='data', dbname='osm',
table='(select way from planet_osm_line) as roads',
symbolizer=mapnik.LineSymbolizer(mapnik.Color('rgb(0,0,0)'),2))
answered
27 Feb '12, 10:47:
mapnik ×337
osm2pgsql ×248
postgis ×132
postgres ×31
question asked: 20 Feb '12, 13:09
question was seen: 7,871 times
last updated: 01 Mar '12, 10:12
Extract all cities with lat and lon coords named in English
mapnik-german osm style, problem with views in postgres database
duplicate key value violates unique constraint "planet_osm_nodes_pkey"
OSM – PostGIS – Mapnik problem!
Mapnik error: column "generator:source" does not exist
column "int_tc_type" does not exist
How to setup Localised maps
How to configure postgresql for mapnik hourly updates?
How do you make the osm2pgsql diff imports run faster than molasses on postgres 8.4?
osm databases
First time here? Check out the FAQ! | https://help.openstreetmap.org/questions/10677/setting-postgis-as-a-datasource?sort=active | CC-MAIN-2021-49 | refinedweb | 647 | 54.63 |
With the release of Roslyn, the .NET cloud compiler, we have our first view of the features that are likely to make it into C# 6.0 - why so little publicity over a major event?
Roslyn is the completely reworked .NET compiler and one of its promised features is that it makes it easier to add language features. As part of the release of the Roslyn source code to the .NET Foundation we also have a list of "language features". These list the features, all 35 of them, that have been or are being added to C# and to Visual Basic. Many of these new features are small modifications to things that already exist and the notes state that they are far from final and could be changed at any time. This is all the more likely given that Roslyn clearly does make language changes easier.
So what are the big new features?
To be honest - there aren't any.
Most of the changes that are listed are small and often just amount to what has come to be called syntactic sugar i.e. shorter ways of writing things that you can already do.
One nice change is to the way we can create auto-properties. You can now define read only auto-properties and you can initialize them. For example:
public int Y { get; } = y;
creates a "get-only" property Y and initializes it to y.
What is slightly stranger is the idea of an expression bodied member a mix of property initialization and lambda expressions. For example:
int X=>x*x;
defines a member which returns the square of x i.e.
int X {get {x*x;}};
The team have generally been busy implementing other variations on initializers. For example you can now initialize a Dictionary object
new Dictionary<string><int>{ ["x"]=3, ["y"]=7 };
and any object supporting a Indexer as a property (see: In search of default properties). You can also perform an index access using a property syntax. For example instead of:
indexObject["x"];
you can write
indexObject.$x
The new primary constructor provides another way to initialize objects. If you include something that looks like a constructor as part of the class definition, e.g.
public class Point(int x, int y){}
then private members are created for the parameters and automatically assigned and the example is equivalent to:
public class Point{ private int x,y; public Point(int x, int y){ this.x=x; this.y=y; }}
That is the primary constructor defines a set of private members that are initialized when the object is created. Once you have a primary constructor all other constructors have to call it using the form this(primary parameters) e.g. this(x,y) so that it can initialize the private members.
A more surprising new initializer is for an event:
new Button {OnClick+=eventHandler};
this almost feels like JavaScript!
Talking of JavaScript (or C) is also a new semicolon operator which lets you string together as many expressions as you like and the value of the expression is the value of the last one. The purpose of this addition is to let you have multiple steps in parts of the language that only allow a single expression to be entered. For example:
var x=(var y=rand(10);y*y);
sets x to the square of a random number.
Another new expression feature is the ability to declare variables within an expression. For example
var y= (int x =3) * x;
sets y to 9.
There are some small but nice additions to the way literals can be used. There is now a binary literal:
int x=0b0111;
You can use digit separators for presentation and to make sure you counted the correct number of digits:
int x = 0xFF_00_FF_00;
You can easily create multiline string literals:
"Hello<newLine>World";
and string literals can incorporate variables using string interpolation:
"Hello \{record.name) how are you to day"
although this one is only a "maybe" so it might not make it into C#6.
Another nice extension is the ability to use static methods more easily. For example if you just want the square root and use some other math functions you currently have to write things like:
var a= Math.Sqrt(Math.Sin(0.5));
but now you can write
Using System.Math;...var a= Sqrt(Sin(0.5));
Using System.Math;
...var a= Sqrt(Sin(0.5));
Also see: C# Gets A New Operator - Safe Navigation
There are also some additions to exception handling. You can now use an await in a catch and finally statement. For example:
try{ do something}Catch { await cleanup();}Finally{ await cleanup();}
try{ do something
}Catch {
await cleanup();
}Finally{
}
Another nice extra for exception handling is the ability to use filters. For example
catch(exception e) if (e.count>5) { do something}
If the condition is true then the catch is entered otherwise the exception keeps going. This is better than handling the exception, doing the test in the catch block and then rethrowing the exception because it keeps the stack intact.
You can find a full list of the new features complete with their current status at the Rosyln site.
So what do you think of C#6?
It really does look like a lot of syntactic fiddling but mostly nice. Is this what the future holds for C#? Does it really miss nothing big that would make it a better language?
I think that C# has reach a level of maturity where it really has settled down and its features probably could not be frozen. This of course itn't true of its associated class library which is where most of the utility in modern languages lives.
Language feature implementation status
C# Gets A New Operator - Safe Navigation [ ... ]
Next week developers will gather in the Barbican Centre, London for the annual Software Design & Development Conference. The event features over 100 in-depth sessions on key software development t [ ... ] | http://www.i-programmer.info/news/89-net/7160-c-60-features.html | CC-MAIN-2016-22 | refinedweb | 998 | 64.1 |
Prerequisite
In order to have Telerik UI for ASP.NET AJAX running, you will need to have ASP.NET AJAX installed on your development/production machine.
Each installation package comes in three types:
MSI file for automatic installation;The Windows Installer Package (MSI) files are intended for easy and automatic installation of a product. The MSI installer unpacks the controls on your computer in a folder in your Program Files named Telerik. Additionally, the installer package adds the Telerik UI for ASP.NET AJAX help files to your VS.NET IDE and to your local copy of MSDN (if you have one installed). If selected in the Feature Selection screen, the installer automatically adds Telerik UI for ASP.NET AJAX Visual Studio Extensions and populates your Visual Studio ToolBox with the Telerik controls.
ZIP file for manual (advanced) installation;The ZIP is used for manual (advanced) installs and for upgrading/updating purposes. The folder structure is different from the Windows Installer (MSI) package. You need to be familiar with with ASP.NET, IIS, setting permissions and creating virtual folders. It is a common practice to unpack the manual installation ZIP file directly to inetpub/wwwroot. Use the ZIP file for deploying on shared hosting.
DLL files only (a.k.a. HOTFIX) for updating/upgrading a product to a newer version. This is a bare-bones upgrade option for our controls. The hotfix contain only the files that your Web Project needs to run correctly. Besides the assemblies it contains the latest JavaScript files and skins (if you've needed to use these as external files), the up-to-date RadEditor dialogs and RadSpell dictionaries.
The latest MSI and ZIP packages available for download already have all updates/HOTFIXES applied. There is no need to update them further.
If you already have Telerik controls installed by the MSI package, you can safely install an updated version - the installer will keep your existing installation. The new files will be placed in a separate folder and the new installation does not damage the common installer files. The new Telerik UI for ASP.NET AJAX installation may have an updated help too: it is registered with the correct namespace. You can see the updated Telerik UI for ASP.NET AJAX help in your help2 viewer. | http://www.telerik.com/help/aspnet-ajax/introduction-which-file-do-i-need-to-install.html | CC-MAIN-2015-18 | refinedweb | 381 | 58.89 |
Context manager for mocking/wrapping stdin/stdout/stderr
Project description
Current Development Version:
Most Recent Stable Release:
Info:
Have a CLI Python application?
Want to automate testing of the actual console input & output of your user-facing components?
stdio Manager can help.
While some functionality here is more or less duplicative of redirect_stdout and redirect_stderr in contextlib within the standard library, it provides (i) a much more concise way to mock both stdout and stderr at the same time, and (ii) a mechanism for mocking stdin, which is not available in contextlib.
First, install:
$ pip install stdio-mgr
Then use!
All of the below examples assume stdio_mgr has already been imported via:
from stdio_mgr import stdio_mgr
Mock stdout:
>>> with stdio_mgr() as (in_, out_, err_): ... print('foobar') ... out_cap = out_.getvalue() >>> out_cap 'foobar\n' >>> in_.closed and out_.closed and err_.closed True
By default print appends a newline after each argument, which is why out_cap is 'foobar\n' and not just 'foobar'.
As currently implemented, stdio_mgr closes all three mocked streams upon exiting the managed context.
Mock stderr:
>>> import warnings >>> with stdio_mgr() as (in_, out_, err_): ... warnings.warn("'foo' has no 'bar'") ... err_cap = err_.getvalue() >>> err_cap "...UserWarning: 'foo' has no 'bar'\n..."
Mock stdin:
The simulated user input has to be pre-loaded to the mocked stream. Be sure to include newlines in the input to correspond to each mocked Enter keypress! Otherwise, input will hang, waiting for a newline that will never come.
If the entirety of the input is known in advance, it can just be provided as an argument to stdio_mgr. Otherwise, .append() mocked input to in_ within the managed context as needed:
>>> with stdio_mgr('foobar\n') as (in_, out_, err_): ... print('baz') ... in_cap = input('??? ') ... ... _ = in_.append(in_cap[:3] + '\n') ... in_cap2 = input('??? ') ... ... out_cap = out_.getvalue() >>> in_cap 'foobar' >>> in_cap2 'foo' >>> out_cap 'baz\n??? foobar\n??? foo\n'
The _ = assignment suppresses printing of the return value from the in_.append() call–otherwise, it would be interleaved in out_cap, since this example is shown for an interactive context. For non-interactive execution, as with unittest, pytest, etc., these ‘muting’ assignments should not be necessary.
Both the '??? ' prompts for input and the mocked input strings are echoed to out_, mimicking what a CLI user would see.
A subtlety: While the trailing newline on, e.g., 'foobar\n' is stripped by input, it is retained in out_. This is because in_ tees the content read from it to out_ before that content is passed to input.
Want to modify internal print calls within a function or method?
In addition to mocking, stdio_mgr can also be used to wrap functions that directly output to stdout/stderr. A stdout example:
>>> def emboxen(func): ... def func_wrapper(s): ... from stdio_mgr import stdio_mgr ... ... with stdio_mgr() as (in_, out_, err_): ... func(s) ... content = out_.getvalue() ... ... max_len = max(map(len, content.splitlines())) ... fmt_str = '| {{: <{0}}} |\n'.format(max_len) ... ... newcontent = '=' * (max_len + 4) + '\n' ... for line in content.splitlines(): ... newcontent += fmt_str.format(line) ... newcontent += '=' * (max_len + 4) ... ... print(newcontent) ... ... return func_wrapper >>> @emboxen ... def testfunc(s): ... print(s) >>> testfunc("""\ ... Foo bar baz quux. ... Lorem ipsum dolor sit amet.""") =============================== | Foo bar baz quux. | | Lorem ipsum dolor sit amet. | ===============================
Available on PyPI (pip install stdio-mgr).
Source on GitHub. Bug reports and feature requests are welcomed at the Issues page there.
License: The MIT License. See LICENSE.txt for full license terms.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/stdio-mgr/ | CC-MAIN-2022-27 | refinedweb | 586 | 60.61 |
How can I find out what percentage of the vertical scrollbar a user has moved through at any given point?
It's easy enough to trap the 'onscroll' event to fire when the user scrolls down the page, but how do I find out within that event how far they have scrolled? In this case, the percentage particularly is what's important. I'm not particularly worried about a solution for IE6.
Do any of the major frameworks (Dojo, jQuery, Prototype, Mootools) expose this in a simple cross-browser compatible way?
Cheers,
If you're using Dojo, you can do the following:
var vp = dijit.getViewport(); return (vp.t / (document.documentElement.scrollHeight - vp.h));
Which will return a value between 0 and 1. | https://codedump.io/share/TTaVF3dAfJ49/1/cross-browser-method-to-determine-vertical-scroll-percentage-in-javascript | CC-MAIN-2017-43 | refinedweb | 123 | 65.62 |
What is GIO? GVFS? GnomeVFS?
GIO is an input/output framework. It provides a bunch of functionalities related to files, directories, removable media, networks, and other stuff that fall under the IO category. This article’s focus—file operations—is just one of its features.
GVFS complements GIO by letting it access remote/virtual filesystems (the VFS part of the name).
GnomeVFS is an older technology that GIO and GVFS together replace.
GIO is distributed as part of GLib, while GVFS sits in a separate package. GnomeVFS was part of GNOME and is being phased out.
GFile, the bread and butter of file operations
This is the file class in GIO. By itself GFile represents nothing more than a URI, but it lets you do all sorts of operations on a file or directory. Here’s a simple example:
```python
>>> import gio
>>> f = gio.File('.')
>>> f.get_uri()
'file:///home/user'
>>> f.get_path()
'/home/user'
>>> f.delete()
Traceback (most recent call last):
  ...
gio.Error: Error removing file: Directory not empty
```
Check the documentation (C reference, Python reference) to see other operations you can do with GFile.
GFileInfo
GFileInfo represents various properties of a file or stream, such as size, permissions, etc. To obtain the GFileInfo of a file, call GFile's query_info method.
```python
>>> f = gio.File('.')
>>> info = f.query_info('standard::type,standard::size')
>>> info.get_file_type()
<enum G_FILE_TYPE_DIRECTORY of type GFileType>
>>> info.get_size()
0L
```
query_info requires a list of attributes that you want to retrieve. Only the specified attributes are then populated into the returned GFileInfo. Multiple attributes can be queried by separating them with commas. Wildcards in the form of * and namespace::* are also accepted.
In this example, we query the file type and size (by the way, the size is always 0 for directories). The results are stored in the returned GFileInfo.
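Since query_info takes all the attribute names as one comma-separated string, it can be handy to build that string from individual names. A tiny plain-Python helper like this (not part of GIO; the name attr_query is made up for illustration):

```python
def attr_query(*attributes):
    """Join GIO attribute names into the single comma-separated
    string that query_info and enumerate_children expect."""
    return ','.join(attributes)
```

With it, the earlier call becomes f.query_info(attr_query('standard::type', 'standard::size')).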
To retrieve a value from GFileInfo, call its get_attribute_* methods (e.g. get_attribute_uint32("standard::type")). Some attributes have special getters (e.g. get_file_type).
A complete list of file attributes is available. The various attribute getter methods are detailed in the GFileInfo documentation (C reference, Python reference).
Example: getting parent directory
This is just a very simple example showing how to get the parent of a directory.
```python
import gio

current = gio.File('.')
parent = current.get_parent()
```
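A detail worth knowing: get_parent returns None when the file is already the filesystem root. The plain-path sketch below (assuming absolute POSIX-style paths; it only illustrates the behaviour — GIO does this for you) mirrors that edge case:

```python
import posixpath

def parent_of(path):
    """Mimic GFile.get_parent for absolute POSIX paths:
    return the parent path, or None at the root."""
    path = path.rstrip('/') or '/'
    if path == '/':
        return None  # the root has no parent, just like get_parent
    return posixpath.dirname(path)
```

So when walking up a directory tree, check for None before recursing.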
Example: getting contents of a directory
This example shows the meat of the Files panel: displaying the subdirectories and files inside a directory. For each file, we also obtain its size.
```python
import gio

current = gio.File('.')
subdirs = []
files = []
infos = current.enumerate_children(
    'standard::name,standard::type,standard::size')
for info in infos:
    child = current.get_child(info.get_name())
    if info.get_file_type() == gio.FILE_TYPE_DIRECTORY:
        subdirs.append(child)
    else:
        files.append((child, info.get_size()))
```
Notice that
enumerate_children works similarly to
query_info, but it retrieves the properties of all the children of a directory.
What else?
That’s it, we’ve covered everything essential for a file browser. The Files panel just builds a GUI and a history feature on top.
I’ve written a small demonstration of a working GIO file browser; try giving it a remote location to browse (don’t forget to
gvfs-mount the filesystem first).
Great tutorial – very helpful. Thanks!
This is cool, how about if I want to open a zip (.jar) file and enumerate it’s children?
Heres what I tried:
That’s quite tricky; I don’t think GIO/GVFS supports opening zipped files as directories per se. What you can do is mount the zip file with GIO, and then access the file as if you’re accessing a remote GIO mount.
I’ll give it a try later. Might actually be a good topic for another post if I can get it working.
Nice Demonstration 🙂
Pingback: Mount berkas arsip dengan GIO « Dari Tiada Ke Tiada
Hi, I am trying to remove an attribute (“metadata::emblems”) using the call
mydir=gio.File("./tmp")
mydir.set_attribute("metadata::emblems", gio.FILE_ATTRIBUTE_TYPE_STRINGV, ["new", "hot"])
and it works. The problem is that I cannot find how to unset it. In the C call g_file_set_attribute you pass a NULL as the value, but in the python version I can’t pass a None vale. And if I pass an empty array the attribute is set to the empty array… Have you any hint? Thanks!
This does seem to be a PyGTK oversight—I can’t find a way to do it either. But the bigger problem is that unsetting an attribute is not documented even in the GIO docs.
You may want to file bug reports on both GLib and PyGTK. (And if you do, please share the links here.)
In the meantime, you can work around this by running
gvfs-set-attribute -t unset path metadata::emblemsor using ctypes.
I l-traced the gvfs-set-attribute command, and it seems that to remove the attribute (from C) you simply call the set_attribute() function passing a NULL pointer.
Unfortunately, it does not work in Python, because it refuses a “None” argument. I do not know how to use ctype… I sent a message to the pygtk-python-hacker list (awaiting moderation because I am not a subscriber).
If I solve something, I’ll post here. Spawning gvfs-set-attribute is a solution, but I need to spawn a process (I was trying to write a tool to keep emblems synchronized between teo rsync-ed trees), and it is not… elegant. | https://sjohannes.wordpress.com/2009/10/10/gio-tutorial-file-operations/ | CC-MAIN-2019-18 | refinedweb | 890 | 58.18 |
Created attachment 16622 [details]
debug trace after mounting with vers=3.0 and seal options
I have two file servers running Oracle Solaris 11.4 with the latest update applied (11.4.33.94.0). These servers claim support for SMBv2 and SMBv3. I am using the Oracle supplied SMB server (non Samba). I am running Solaris on x86_64 hardware and so this is not an endian related issue.
My two CentOS 8 servers running Samba SMB client libsmbclient.x86_64 (version 4.13.3-3.el8) can only negotiate SMBv2.1. Any attempt to negotiate SMBv3 (with or without encryption) results in the following error message:
mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
While I appreciate it may be tempting to blame the Oracle SMB server implementation, I can connect successfully to these shares using MacOS and I have used wireshark to examine the SMB protocol negotiation and have determined that MacOS can successfully negotiate SMBv3 with encryption.
I have Samba debug trace files from the CentoOS Samba client side but lack adequate knowledge of the Samba component to interpret the debug traces. Any help in determining if this is a defect in the Samba client component would be appreciated
Can you get a wireshark trace on port 445 of the failed interaction. That should help track things down.
Also, try using command-line smbclient to connect with encryption and debug level 10. That uses the same internal code paths as libsmbclient and should be easier to reproduce.
Thanks !
Jeremy.
Created attachment 16623 [details]
tcpdump of port 445 on Solaris server
Perhaps I'm doing something wrong but I cannot reproduce the issue with:
smbclient //nonsuch/myshare --debuglevel=10 --encrypt --user=myuser
I am prompted for a password and it appears to work correctly. In the debug output I can see "negotiated dialect[SMB3_11] against server[10.45.1.72]" and can successfully browse the share using the command line.
The mount in fstab however still does not work.
With the following in fstab
//nonsuch/myshare/folder /mnt/nonsuch/folder cifs noauto,vers=3.11,seal,credentials=/root/creds_smb_library_core2 0 0
I get the following in dmesg:
[73279.063581] CIFS: Attempting to mount //nonsuch/myshare/folder
[73279.064792] CIFS: VFS: \\nonsuch Dialect not supported by server. Consider specifying vers=1.0 or vers=2.0 on mount for accessing older servers
[73279.066514] CIFS: VFS: cifs_mount failed w/return code = -95
With the following in fstab (which was what I was using when originally reporting this issue)
//nonsuch/myshare/folder /mnt/nonsuch/folder cifs noauto,vers=3.0,seal,credentials=/root/creds_smb_library_core2 0 0
I get the following in dmesg:
[73322.319649] CIFS: Attempting to mount //nonsuch/myshare/folder
[73322.483736] CIFS: VFS: \\nonsuch failed to connect to IPC (rc=-11)
[73322.582204] CIFS: VFS: session 00000000f5e01434 has no tcon available for a dfs referral request
[73322.583829] CIFS: VFS: cifs_mount failed w/return code = -2
and the attached interaction with port 445 in wireshark as captured by the Solaris server.
Oh, well in this case it's a problem with the Linux kernel SMB3 client, which isn't developed as part of Samba, it's separate. Looks like the Samba code is working fine. When you mentioned libsmbclient I assumed you were doing it via our libraries, but doing fstab and mount.cifs is the kernel client.
I'll re-assign to Steve French.
Apologies for my ignorance, my knowledge is a little sketchy on the behind the scenes and I mis-understood this fact. Does this mean that this bugzilla is not the appropriate one for logging this issue?
Well hopefully Steve will pick it up from here, but you can also email him directly at Steve French <smfrench@gmail.com>.
There isn't much we can see in the traces you provide except the following:
1) for the "445broken.pcap" wireshark trace we can see that the server hung up the session (probably server bug but hard to prove since the request they hung up on (SMB3 tree connect) is encrypted when you mount with "seal"). If you have access to that system and can reproduce it, it might be helpful to dump the decryption keys so we can see the failing request and make sure it is not malformed. See for details on how to dump decryption keys
2) for the debug trace you attached all we can see is the "EINVAL" being returned
[ 205.645184] CIFS: fs/cifs/connect.c: Received no data or error: 0
[ 205.645187] CIFS: fs/cifs/connect.c: cifs_reconnect: will not do DFS
failover: rc = -22
presumably the same issue (the server hung up due to a bug processing the tree connect request (the first encrypted request) when encryption is specified on mount).
Could you retry without specifying "seal" on mount and include (or send to my gmail) the wireshark traces? In addition the dynamic trace info may be useful:
1) In one process type "trace-cmd record -e cifs"
2) Run the test in another window
3) dump the dynamic trace info "trace-cmd show"
4) kill the trace from step 1
If it can not be tried without encryption ("seal" on mount) then can you dump the decryption keys for the trace you run (as described in the link above)
Two additional questions:)
2) Does mounting with "vers=2.1" work?)
If I fail to specify the vers= option. Mounting fails completely. Dmesg says our usual:
[162979.482987] CIFS: VFS: \\nonsuch failed to connect to IPC (rc=-11)
[162979.484449] CIFS: VFS: session 0000000083b7840c has no tcon available for a dfs referral request
[162979.485899] CIFS: VFS: cifs_mount failed w/return code = -2
and the Solaris server says:
May 24 23:09:46 nonsuch smbcmn: [ID 997540 kern.warning] WARNING: ../..)!
I have also seen the above Solaris server side messages intermittently and think they may be related to the issue I am experiencing. But if SMBv3 is broken on Solaris, I do not understand why it works fine with MacOS.
2) Does mounting with "vers=2.1" work?
Yes, it works fine. In fact, it is required to make anything work as omitting it results in the above behaviour.
Regarding your other comments about traces and encryption keys, I will need to time to get these, but will indeed do so.
Since the Solaris server logs:
May 24 23:09:46 nonsuch smbcmn: [ID 997540 kern.warning] WARNING: ../../common/fs/smbsrv/smb2_dispatch.c:smb2_dispatch_message:134:Decryption failure (71)!
are you sure that you mounted without encryption (ie did not specify "seal=" on mount)?
If this connection is defaulting to encryption whether or not the client specifies it on mount, that implies that the server is configured with encryption as required ... which is odd - because the server allowed vers=2.1 (which is not encrypted, encryption was added in the SMB3 and later versions of the protocol and not supported with SMB2.1) but fails with vers=3.0 or 3.1.1 (smb3.1.1 is the typical default) which presumably means the server is negotiating with encryption required (but only for a subset of dialects). Strange server configuration.
Can you send or attach the vers=3.1.1 (or default with no vers= specified) wireshark trace so we can see what crypto algorithm the server is defaulting to (even if we can't see the keys - we can see how it is trying to encrypt/decrypt if smb3.1.1 is used instead of smb3.0)
When I omitted vers= as you requested, I definitely did not specify seal. I have attached the server SMB configuration for completeness but the most relevant settings are below:
server_lmauth_level=5
server_minprotocol=2.1
server_maxprotocol=3.1
server_encrypt_data=true
server_reject_unencrypt=false
server_signing_enabled=true
server_signing_required=false
restrict_anonymous=true
I will get a wireshark trace with vers=3.1.1 as you requested shortly.
Created attachment 16628 [details]
Solaris server SMB configuration
Will be interesting to see which crypto algorithm they negotiate - but at least one way around this is to mount with vers=2.1 or change the config line:
server_encrypt_data=true
so that it doesn't end up causing SMB3.0 and later to encrypt until we figure out where the bug is in encryption (presumably is on the server side since encrypted mounts work to every other server from Linux).
Unfortunately, I spent quite a bit of time trying various combination of options and I have to say I have never managed to get SMBv3 to work between Linux and Solaris with or without encryption. E.g. setting vers=3.1.1 without seal and with server_encrypt_data=false on Solaris does not result in successful mounting:
[164849.013736] CIFS: VFS: \\seraphix-3.achelon.net failed to connect to IPC (rc=-13)
[164849.015760] CIFS: VFS: cifs_mount failed w/return code = -13
But let's try and debug one test case at a time so things don't get confusing. I have a wireshark dump with vers=3.1.1 and seal options. I will email to your gmail.
If there is a bug in Solaris SMBv3 encryption handling, I'm perplexed as to why it works fine when doing this from smbclient. E.g. the below appears to work:
smbclient //nonsuch/myshare --debuglevel=10 --encrypt --user=myuser
Based on the trace you sent - can see that when mounting with 3.1.1 (or default which ends up the same thing), the server responds with SMB2_ENCRYPTION_CAPABILITIES set to CipherId: AES-128-GCM which is interesting because that is the 'normal' case we see (Windows, Azure, current Samba server etc.) so this is less likely to be a bug in the client due to falling back to something different than the more common GCM.
Do you have the equivalent trace from smbclient (the Samba userspace tool) to the same share (trying to negotiate SMB3.1.1) for comparison?
In both trace you sent (seal and not specifying seal on the mount) you can see the server is requiring encryption (unless you mount with smb2.1 or earlier). See attached screenshot
Created attachment 16629 [details]
session-setup-response-when-no-seal-on-mount
Understood. Please see your gmail for a pcap and a debug level 10 trace of smbclient successfully negotiating an encrypted SMB 3.1.1 connection with the same Solaris share, whereas the kernel mount for the same failed.
Comparing with smbclient, there are a few interesting things which differ:
a) smbclient sets a default domain name ("SAMBA"). To make this identical for the kernel mount ("mount -t cifs ...") case you could try setting domain= parameter to the same. I doubt this will make a difference because in neither case does the server indicate in its SessionSetup response that authentication ended up as 'guest' so presumably Solaris server thinks the user authenticated properly in both cases (albeit it could be a very unlikely case where "SAMBA/username" is different than "username")
b) there are some NegotiateFlags (NTLMSSP flags) set differently during negotiation:
1) smbclient sets "Negotiate Version"
2) cifs.ko sets "Negotiate Seal" and "Negotiate Target Info" and "Negotiate 56"
but otherwise the flags look the same.
c) smbclient sends both an old Lanman (Lanmanv2) and NTLM (NTLMv2) response in the NTLMSSP_AUTH SessionSetup request, but zeroes the Lanman field, while cifs.ko doesn't send Lanman. This is unlikely to be related
Key next steps:
1) seeing if it is possible to decrypt the wireshark trace (see link provided earlier for instructions) although this may require rebuilding the cifs.ko to dump keys (does Solaris have a way to view encrypted traces taken on the server side?)
2) looking in more detail at the server. It doesn't indicate why it was rejected:
./..)!
Do you have any contacts with Solaris support to see if they can see if they can provide more information? My guess is that they don't like the format or the flags of something specified in the tree connect (since this works to every other server type) - but they may also have some subtle rounding error with decryption on their server side.
Googling for the error I do see one other report of similar sounding problem to Solaris so it has been around a long time (see)
I'm sure it won't shock you, but adding domain=SAMBA to the mount options hasn't miraculously fixed this, but at least it's another data point.
I am going to see if I can rebuild cifs.ko with the right options to dump the keys. But this is a lot of work - is it likely to lead to anything useful?
The reason that it is of some value is that if wireshark can decrypt it and shows no errors (ie decrypt the first encrypted frame, the SMB3.1.1 tree connect request) then it is even more likely a server bug ... (perhaps some strange case where they expect a padded response that has a length divisible by 8 or some such bug)
if you have access to the source RPM then rebuilding it might only take a few minutes (you could e.g. just remove the 2 ifdef CONFIG_CIFS_DEBUG_DUMP_KEYS in fs/cifs/smb2transport.c
e.g. remove the ifdef and endif here (and the one before that in the same file)
#ifdef CONFIG_CIFS_DEBUG_DUMP_KEYS
cifs_dbg(VFS, "%s: dumping generated AES session keys\n", __func__);
/*
* The session id is opaque in terms of endianness, so we can't
* print it as a long long. we dump it as we got it on the wire
*/
cifs_dbg(VFS, "Session Id %*ph\n", (int)sizeof(ses->Suid),
&ses->Suid);
cifs_dbg(VFS, "Cipher type %d\n", server->cipher_type);
cifs_dbg(VFS, "Session Key %*ph\n",
SMB2_NTLMV2_SESSKEY_SIZE, ses->auth_key.response);
cifs_dbg(VFS, "Signing Key %*ph\n",
SMB3_SIGN_KEY_SIZE, ses->smb3signingkey);
if ((server->cipher_type == SMB2_ENCRYPTION_AES256_CCM) ||
(server->cipher_type == SMB2_ENCRYPTION_AES256_GCM)) {
cifs_dbg(VFS, "ServerIn Key %*ph\n",
SMB3_GCM256_CRYPTKEY_SIZE, ses->smb3encryptionkey);
cifs_dbg(VFS, "ServerOut Key %*ph\n",
SMB3_GCM256_CRYPTKEY_SIZE, ses->smb3decryptionkey);
} else {
cifs_dbg(VFS, "ServerIn Key %*ph\n",
SMB3_GCM128_CRYPTKEY_SIZE, ses->smb3encryptionkey);
cifs_dbg(VFS, "ServerOut Key %*ph\n",
SMB3_GCM128_CRYPTKEY_SIZE, ses->smb3decryptionkey);
}
#endif
You might not need to recompile your kernel.
I have a GDB script (made for 4.4) you might be able to run as root
It reads the keys from kernel memory. The offset (OFF var) might have to be updated for your kernel (or not) but it's worth a try.
(this script just reads from memory, worst case it prints garbage)
Ah sorry, this script assumes you have a successful mount point. If mount fails it won't be of any use.
I will look into getting the decryption keys. I thought maybe I could dump the TreeConnect on the Solaris side using the Dtrace capability but if it's possible I haven't figured it out yet.
I'm baffled as to how MacOS and smbclient work fine but Linux kernel mounts don't.
I guess my fear is that some flag is set wrong, maybe during negotiation, and there is no simple knob we can turn to set it just to test.
> fear is that some flag is set wrong, maybe during negotiation
There isn't an obvious reason why any of the flag differences would matter (unless server bug), but it should be possible to test mount with smb3.1.1 (without encryption) by changing the server config line
server_encrypt_data=true
and make sure server doesn't hang up on tree connect (as it does with encryption)
If we verify that wireshark can decrypt it, then the only strange guesses I can think of that would cause the server to give up on the tree connect are:
1) difference in tree connect flags with smb3.1.1
2) differences in padding of the tree connect request that confuse the server
Unfortunately, as I mentioned by email. The connection does indeed also fail with server_encrypt_data=false.
After setting server_encrypt_data=false I try to mount with vers=3.1.1 and without the seal option (since encryption is off on the server). This results in the following mysterious error:
mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
[249955.014343] CIFS: Attempting to mount //nonsuch/myshare
[249955.020762] CIFS: VFS: \\nonsuch failed to connect to IPC (rc=-13)
[249955.023383] CIFS: VFS: cifs_mount failed w/return code = -13
see wireshark trace in your email "3.1.1_encryptionoff.pcap".
Trying with smbclient on the same share with the same user and same password with server_encrypt_data=false on the server and no --encrypt option results in success:
smbclient //nonsuch/myshare --debuglevel=10 --user=myuser
see wireshark trace in your email "3.1.1_smbclient.pcap".
Presumably Solaris server doesn't like the signature (which is odd). Let's compare with SMB3.0 as well if possible. Was the SMB2.1 mounts (which succeeded) signed? That is weird if signed works with 2.1 and fails with 3.0 and 3.1.1
The server says:
May 26 19:53:58 nonsuch smbsrv: [ID 211007 kern.warning] WARNING: bad signature, cmd=0x3
so you are right, it doesn't like the signature.
I will obtain wireshark traces with vers=3.0 (and equivalent smb.conf
option for smbclient) set for comparison.
Note that in vers=3.1.1 the TreeConnect calls are always signed
which means that client and server must agree on what the session key is.
Just like when encryption is used.
Try to see what happens if you use vers=3.0 in this scenario as the TreeConnect will not be signed.
It does sound like there is something wrong with the session key and client and server disagree on it.
with vers=3.0 (and without seal) specified on the client and server_encrypt_data=false specified on the server, the following new fun happens:
mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
and dmesg says:
[261829.875860] CIFS: Attempting to mount //nonsuch/myshare
[261829.891575] CIFS: VFS: \\nonsuch\IPC$ validate protocol negotiate failed: -13
[261829.893269] CIFS: VFS: \\nonsuch failed to connect to IPC (rc=-5)
[261829.895940] CIFS: VFS: \\nonsuch\myshare validate protocol negotiate failed: -13
[261829.897752] CIFS: VFS: session 00000000bfe08581 has no tcon available for a dfs referral request
[261829.900102] CIFS: VFS: cifs_mount failed w/return code = -2
I also notice a new message on the server server. Note that cmd=0x3 is now cmd=0xb - whatever that signifies.
May 26 23:11:53 nonsuch smbsrv: [ID 211007 kern.warning] WARNING: bad signature, cmd=0xb
A wireshark trace for the above has been emailed to you "3.0_mount.pcap"
As usual, for comparison smbclient works flawless (hopefully -m SMB3 was the right way to force SMB 3.0 dialect...)
smbclient //nonsuch/myshare -m SMB3 --user=myuser
A wireshark trace for the above smbclient interaction has been emailed to you: "3.0_smbclient.pcap"
Throughout all this dialogue with you, signing on the server has been 'enabled', but not 'required'.
BUT in 3.1.1 signing is required for all TreeConnect calls (0x03) always, by the protocol.
In 3.0 and 3.0.2 signing is required for all Ioctl:fsctl_validate_negotiate_info calls. Also by protocol requirements.
The command code for Ioctl is 0x0b, which you saw in the log.
At least it got the mount process a bit further by using vers=3.0 since we got past the TreeConnect. I forgot that in 3.0 we have that Ioctl call.
It could have worked, but I forgot about the Ioctl :-(
Anyway, this is another datapoint that tells us that the issue is that it is related to the session key imo.
Thanks for your help it - does seem that you've narrowed down the cause a little. Is it still required for me to dump those keys?
As requested I am emailing you details of the successful SMB2.1 negotiation using both smbclient:
smbclient //nonsuch/myshare -m SMB2 --user=myuser
and the CIFS mount command (with vers=2.1 and without seal).
(In reply to Richard Flint from comment #33)
From 'man smb.conf'
...
SMB2: Re-implementation of the SMB protocol. Used by Windows Vista and.
...
So '-m SMB3' is the same as '-m SMB3_11', I guess to want
'-m SMB3_00' instead in order to force 3.0.0
Thanks for the correction, you are quite right, the comparison file should have been done with -m SMB3_00 not -m SMB3.
I have created two new files:
SMB3_00_smbclient.pcap created with command:
smbclient //nonsuch/myshare -m SMB3_00 --user=myuser
and
SMB3_00_encrypt_smbclient.pcap created with command:
smbclient //nonsuch/myshare -m SMB3_00 --encrypt --user=myuser
I am emailing the traces to Steve French's gmail.
Appreciate it has been sometime since this was updated but I just wanted to update this for completeness. I have tested this on Fedora 35 (5.16.9-200.fc35.x86_64) and can confirm successful negotiation of 3.0 3.0.2 and 3.1.1 with Solaris 11.4 servers both with and without encryption (as enabled by the seal parameter).
E.g. the following is successful:
//myserver/myshare/myfolder /mnt/myserver/myfolder cifs noauto,nounix,vers=3.1.1,seal,noserverino,ro,_netdev,noexec,nosuid,perm,nodev,iocharset=utf8,cache=strict,sec=ntlmv2,credentials=/root/password,port=445,context="system_u:object_r:myapp_content_t:s0",forceuid,forcegid,file_mode=0440,dir_mode=0550,uid=1000,gid=1001 0 0
Though noisy. E.g.:
[Sat Feb 19 08:34:30 2022] CIFS: decode_ntlmssp_challenge: authentication has been weakened as server does not support key exchange
[Sat Feb 19 08:34:30 2022] CIFS: VFS: \\myserver\myshare error -9 on ioctl to get interface list
[Sat Feb 19 08:34:30 2022] CIFS: VFS: \\myserver\IPC$ smb2_get_dfs_refer: ioctl error: rc=-19
Intriguingly, despite specifying nounix in the mount, Wireshark shows we are still sending SMB2_POSIX_EXTENSIONS_CAPABILITY in the Negotiate Protocol Request - I'm not clear if that is the desired behaviour.
The issue is still reproducible on the latest CentOS 8 Stream release, but that it works on Fedora 35 makes we wonder if an issue was fixed in the meantime that never got back-ported to CentOS 8. If that's true, then this isn't really a fault in the CIFSVFS product itself I think - or maybe it isn't anymore.
Richard, If CentOS stream is missing backporting a fix for the past year to their 4.18 kernel (which is fairly old), would be less confusing to follow up with them. Do you remember if we narrowed down on the email thread what the change was that fixed this? | https://bugzilla.samba.org/show_bug.cgi?id=14713 | CC-MAIN-2022-27 | refinedweb | 3,793 | 64.51 |
Back to index
#include <nsTLSSocketProvider.h>
Definition at line 55 of file nsTLSSocketProvider.h.
Definition at line 44 of file nsTLSSocketProvider.cpp.
{ }
Definition at line 48 of file nsTLSSocketProvider.cpp.
{ }
addToSocket
This function is called to allow the socket provider to layer a PRFileDesc on top of another PRFileDesc. For example, SSL via a SOCKS proxy.
Parameters are the same as newSocket with the exception of aFileDesc, which is an in-param instead.
newSocket
PROXY_RESOLVES_HOST.
This flag is set if the proxy is to perform hostname resolution instead of the client. When set, the hostname parameter passed when in this interface will be used instead of the address structure passed for a later connect et al. request.
Definition at line 108 of file nsISocketProvider.idl. | https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/classns_t_l_s_socket_provider.html | CC-MAIN-2017-51 | refinedweb | 125 | 59.4 |
Hi there,
I have run into TLE with my first solution that used a set for memorizing ugly numbers we have seen:
'''
def mySuperUglyNumber(n, primes):
uglies, h, haveSeen = [],[1], set()
while len(uglies) < n:
act = heapq.heappop(h)
for p in primes:
if act * p not in haveSeen:
heapq.heappush(h, p * act)
haveSeen.add(act * p)
uglies.append(act)
return uglies[n-1]
'''
Then I have come up with another approach that gets rid of the set and which was supposed to run a lot faster.
'''
def mySuperUglyNumber2(n, primes):
uglies, h = [],[(1,1)]
while True:
act = heapq.heappop(h)
for p in primes:
if p >= act[1]:
heapq.heappush(h, (p * act[0],p))
uglies.append(act[0])
if len(uglies) == n: return uglies[-1]
'''
It did solve more test cases. Nevertheless, when I compared the performance of the generator solution and the above two these seem to perform fairly close to one another (sometimes the second solution got worse running time).
I would appreciate any insightful comment on what is really going on with this second solution. | https://discuss.leetcode.com/topic/103744/please-help-explain-the-experienced-speed-diff | CC-MAIN-2018-05 | refinedweb | 181 | 65.62 |
Defines master page–specific (.master file) attributes that are used by the ASP.NET page parser and compiler.
<%@ Master attribute="value" [attribute="value"...] %>
Term
Definition
AutoEventWireup
Indicates whether simple event handlers can be defined for specific life cycle stages using the syntax Page without any explicit hookup or event signature. true if event auto-wiring is enabled; otherwise, false. The default is true. For more information, see ASP.NET Web Server Control Event Model.
ClassName
Specifies the class name for the class that is automatically generated from the markup and compiled when the master page is processed. This value can be any valid class name and can include a namespace.
CodeFile
Specifies the name of a separate file that contains a partial class with the event handlers and other master page–specific code. For more information, see ASP.NET Web Page Code Model.
CompilationMode
Specifies whether to compile an ASP.NET master page at run time. Options are Always to always compile the page; Auto, if ASP.NET is to avoid compiling the page, if possible; and Never, to never compile the page or control. The default is Always.
CompilerOptions
Provides a string containing compiler options to use to compile the page. In C# and Microsoft Visual Basic, this is a sequence of compiler command-line switches.
Debug
Indicates whether to compile the master page with debug symbols. true, to compile with debug symbols; otherwise, false.
Description
Provides a text description of the master page. This value is ignored by the ASP.NET parser.
EnableTheming
Indicates whether the appearance of the master page and of controls on the master page can be modified, if a theme is applied. true if a theme can be applied; otherwise, false. The default is true. Setting the EnableTheming attribute is primarily useful when a page theme is defined in the Web.config file and applies to all pages, by default. For more information, see ASP.NET Themes and Skins Overview.
EnableViewState
Indicates whether view state is maintained across page requests. true to maintain view state; otherwise, false. The default is true.
Explicit
Determines whether the page is compiled using the Visual Basic Option Explicit mode. true indicates that the Visual Basic explicit compile option is enabled and that all variables must be declared using a Dim, Private, Public, or ReDim statement; otherwise, false. The default is false.
The Explicit attribute is set to true in the Machine.config file. For more information, see Machine Configuration Files.
Inherits
Specifies a code-behind class for the page to inherit. This can be any class derived from the MasterPage class. For information about code-behind classes, see ASP.NET Page Class Overview.
Language
Specifies the language used when compiling all inline rendering (<% %> and <%= %>) and code declaration blocks within the page. Values can represent any language that is supported by the .NET Framework, including VB (Visual Basic), C#, and JScript.
LinePragmas
Determines whether the runtime should generate pragmas in the generated code.
MasterPageFile
Specifies the .master file that acts as a master page for a master page. The MasterPageFile attribute is used in a master page when defining a child master page in a nested master-page scenario. For details, see Nested ASP.NET Master Pages.
Src
Specifies the source file name of the code-behind class to dynamically compile when the page is requested. You can choose to include programming logic for your page either in a code-behind class or in a Code Declaration Blocks in the .aspx file.
Strict
Specifies whether to compile the page using the Visual Basic Option Strict mode. true if Option Strict is enabled; otherwise, false. The default is false.
WarningLevel
Specifies the compiler warning level at which you want the compiler to abort compilation for the page. Possible values are from 0 through 4. For more information, see WarningLevel()()().
You can use the @ Master directive only in master pages. Master pages are defined in files with the .master extension. You can include only one @ Master directive per .master file..
<% @ Master Language="VB" CodeFile="MasterPageSample.master.vb" Inherits="MasterPageSample" %> | http://msdn.microsoft.com/en-us/library/ms228176.aspx | crawl-002 | refinedweb | 678 | 61.33 |
nothing can rival the XML ecosystem of tools and libraries and standards, except for the Web itself
--Stefan Tilkov on the rest-discuss mailing list, Saturday, 29 Dec 2007 14:22:52
It is the business reality that availability is more important than data consistency for certain classes of applications. A lot of the culture and technologies of the relational database world are about preserving data consistency [which is a good thing because I don’t want money going missing from my bank account because someone thought the importance of write consistency is overstated] while the culture around Web applications is about reaching scale cheaply while maintaining high availability in situations where the occurrence of data loss is unfortunate but not catastrophic (e.g. lost blog comments, mistagged photos, undelivered friend requests, etc).
--Dare Obasanjo
Read the rest in Dare Obasanjo aka Carnage4Life
--Jakob Nielsen
Read the rest in Web 2.0 Can Be Dangerous (Jakob Nielsen's Alertbox)
AOL has never known what to do with Netscape. They squandered that asset. Eventually, the bottom dropped out.
--Greg Sterling
Read the rest in Macworld: News: Can AOL keep Netscape.com from fading away?
Once, while speaking to a "technical manager" at Comcast, I was told that their email service is "consumer grade". Someone wanting to avoid missing any email should use a different service. He thought that Comcast's business-level accounts would serve, but he wasn't sure. It would be so much easier if Comcast *advertised* that fact to its clients.
--Andrew Gideon on the wwwac mailing list, Monday, 24 Dec 2007 14:29:35 +0000
So now we have Web 2.0, and everyone is a content provider. The content provided by most people is their self. Their actual being, with all the wry, seasoned, insightful anecdotes they have acquired in their 13-to-18 years of living on this earth and observing the foibles and follies of their fellow men and women. We all know what richness lies there, and how these founts of erudition have made life better for all around them. The point of these sites is to offer just enough interactivity to an otherwise passive pursuit to make the viewer feel that they "own" the place. You stroke their ego, make it easy for them to have "friends", let them stroke others and be stroked in return. And while they're preening and posing, they're staring at a car ad, or an ad for the latest Nike sneaker, or whatever the algorithm decides they are most likely to click on and, perhaps, buy.
--Alan Brooks on the wwwac mailing list, Wednesday, 07 Nov 2007 17:05:27
The Web Browser did what Java could not - create a platform for the deployment of rich user experiences - and by being sufficiently generic in the implementation of the mechanisms of what a browser does, and by the successful implementation of the plug-in concept, we now have a "Write a few times, run almost anywhere" model, and although this is not quite "write once, run anywhere" it's good enough, and good enough usually beats out the endless quest for perfection which inevitably leads to madness. At this point, the browser can do 80% of what the desktop application can do, the 20% it can't do is sometimes of no consequence.
--Martin Focazio on the wwwac mailing list, Wednesday, 10 Oct 2007 13:59:41.
--Roger Johansson
Read the rest in Lame excuses for not being a Web professional | 456 Berea Street
I don't think standards writing should be geared around malfunctioning tools.
--G. Ken Holman on the xsl-list mailing list, Friday, 03 Aug 2007 07:30:53
The point of WebKit for Apple was to define an open source standard for rendering web pages on all sorts of Internet-enabled devices. This also explains why Apple used KHTML instead of Gecko or its own web engine for Safari -- even though KHTML was terrible at rendering web pages that were optimized for Internet Explorer. KHTML is the only rendering engine that can pass the Acid2 web-rendering test, and following a standard was more important to Apple than correctly rendering poorly written web pages.
--Mark Stephens
Read the rest in I, Cringely . The Pulpit . Kindling | PBS
in English at least, all acronyms and initialisms are abbreviations, but not vice versa. That is, the set of English acronyms and the set of English initialisms are disjoint subsets of the set of English abbreviations. Furthermore, there is a non-empty set of English abbreviations that contains no English initialisms nor English acronyms.
--Sam Kuper on the whatwg mailing list, Sunday, 13 Dec 2007 00:49:14
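The set relationships Kuper describes can be sketched directly. The example words below are hypothetical choices of mine, picked only to illustrate the quoted claim:

```python
# Hypothetical example words chosen to illustrate the quoted claim.
abbreviations = {"NASA", "HTML", "etc.", "Dr."}
acronyms = {"NASA"}       # abbreviations pronounced as words
initialisms = {"HTML"}    # abbreviations spelled out letter by letter

# Both are subsets of the abbreviations, and disjoint from each other:
print(acronyms <= abbreviations and initialisms <= abbreviations)  # → True
print(acronyms.isdisjoint(initialisms))                            # → True
# And a non-empty set of abbreviations is neither:
print(len(abbreviations - acronyms - initialisms) > 0)             # → True
```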
Again, I say there's nothing unRESTful about cookies. It's putting a session id in a cookie and hiding data in the session on the server that's wrong.
--Nic Ferrier on the REST Discuss mailing list, Sunday, 20 Apr 2006 23:48:33.
--Kurt Cagle
Read the rest in xforms vs. ruby - a rebuttal (sort of)
one advantage XML offers over things like TSV and XDR is a certain measure of future-proofing. Change your RDBMS data dictionary and TSV instances out the wild often become toast. Same thing for XDR - in fact direct object serialization is almost always *wrong*. Anyhow, because XML has all these verbose labels saying what each chunk is, it tends to be more change-resistant than most of what has come before.
--Tim Bray on the xml-dev mailing list, Monday, 16 Sep 2002
At Yahoo we have a couple of extra copies of the Web sitting around.
--Micah Dubinko, Yahoo, at XML 2007, Tuesday, December 4, 2007
OfficeOpen XML is really cool because it's XML and you can mess with it.
--Mark Turner, Mark Logic, at XML 2007, Monday, December 3, 2007
XML will die when you rip it out of my cold dead hands.
--C. Michael Sperberg-McQueen at XML 2007, Monday, December 3, 2007
And with textual vs binary XML, you don't just have to overcome inertia, you have to overcome the fact that a textual format has very considerable advantages in terms of the ability of humans to read and edit the content directly. Look at the xsl-list - how many people would offer free advice and help on debugging XSLT stylesheets if the source documents were supplied in binary rather than textual form? Human performance is much more important than machine performance.
--Michael Kay on the xml-dev mailing list, Sunday, 10 Jun 2007 18:53:17
But consider — if one browser showed error messages on half the Web, and another browser showed no errors and instead showed the Web roughly as the author intended. Which browser would the average person use?
If we want to make HTML 5 successful, we have to make sure the browser vendors pay attention to it. Any requirements that make their market share go down relative to browsers who aren't following the spec will immediately be ignored.
--Ian Hickson
Read the rest in Conversation With X/HTML 5 Team.
--Mark Stephens
Read the rest in I, Cringely . The Pulpit . When Networks Collide | PBS
one of the essences of Schematron is the natural language assertion: the grammar- based schema languages all have the fundamental problem that they don't have any mechanism for effectively communicating to humans diagnostics expressed in terms of the problem domain and data graph: they can only give generic messages in terms of grammar theory, the XML tree and the specific element names. One consequence of this is that as soon as the XML is hidden by some interface, the canned validation messages (which are given in terms of the XML and grammar) become incomprehensible.
--Rick Jelliffe on the xml-dev mailing list, Wednesday, 29 Nov 2006 16:34:47
Be sure to measure latency as well as throughput. The old engineering proverb is "Bandwidth can be bought, but latency is forever."
--Greg Guerin on the java-dev mailing list, Sunday, 25 Nov 2007 12:14:59.
--Bruce Tognazzini
Read the rest in Manufacturer Sites that Sell
But it's always said, "The business is dying! The business is dying!" I don't think so. There's too many good musicians around for the music business to be sagging. There's so many different styles and facets of the 360-degree musical sphere to listen to. From tribal to classical music, it's all there. If the bottom was to sag out of that, for God's sake, help us all.
--Jimmy Page, 1975
Read the rest in Cameron Crowe
I don't often sympathize with MSFT. However, I expect that their corporate heads are dazed, confused and downright annoyed by Google's ongoing grab for private information. The slightest attempt at MSFT to do the same - i.e. have their tools and environment and such "phone home" with information from the user's computer - has been analyzed and studied and any potential privacy violations denounced as A Great Evil.
And then people just hand over the same information to Google [for free].
--Andrew Gideon on the wwwac mailing list, Friday, 27 Apr 2007 11:46:32
Creating your own blog is about as easy as creating your own urine, and you're about as likely to find someone else interested in it.
--Lore Sjöberg
Read the rest in Wired News: The Ultimate Blog Post
I used XOM pretty heavily in a recent NIH project on MacOS X under 1.5, and performance was good, memory usage was good, no major parser bugs bit me.
--Scott Ellsworth on the java-dev mailing list, Tuesday, 19 Jul 2005 10:51:35.
--Michael Arrington
Read the rest in Techcrunch » Blog Archive » Ten Things I Wish IE 7 Was About to Deliver
The server can do anything it wants to handle a GET request, involving any side effects whatsoever. However, there is a clear understanding that a GET request from a client can never be construed as a demand for any of these side effects. The client bears no blame for issuing a GET request that caused the server to do something untoward; if something undesirable happened, it’s the server’s fault.
--Aristotle Pagaltzis on the rest-discuss mailing list, Saturday, 27 Oct 2007 19:04:21
The complexity of RDF is vastly overstated
--Brendan Taylor on the atom-syntax mailing list, Saturday, 6 Oct 2007 09:23:16
Programmers are like most people in that they have an investment in what they've already learned, and are much less likely to adopt something new unless they can see many benefits to their work. In many respects XML is a fairly serious investment, as you are changing the very workflow patterns that people have developed. People may want improvements, but anything that disrupts their workflow will tend to make them much more anxious about learning anything new.
--Kurt Cagle on the xml-dev mailing list, Friday, 28 Jan 2005 11:17:32
saying I need a fast parser is a bit like saying I need a fast car. What you mean by fast may depend on whether you're driving Nascar, Formula 1, or just trying to make good time on a vacation.
--Noah Mendelsohn on the xml-dev mailing list, Monday, 22 Oct 2007 17:18:23
Of *ALL* of the client-side standards, XSLT is by far and beyond the most reliable when it comes to cross-browser support.
--M. David Peterson on the xsl-list mailing list, Sunday, 16 Sep 2007 11:54:27
While schema validators generally attempt to implement the W3C XML Schema recommendations in their entirety, schema-driven data binding tools tend, on the contrary, to support only subsets of those recommendations. And since these subsets differ, a schema written to validate documents has little chance of working with these tools, and a schema that works with one of those tools will not necessarily work with the others.
--Eric van der Vlist
Read the rest in XML 2006 : souvenirs, souvenirs
One of the key reasons for XML's success is its high degree of platform independence.
--Tim Bray on the xml-dev mailing list, Tuesday, 09 Apr 2002
you can't distinguish between a character represented natively, and the same character represented as an entity reference. And in your application, you shouldn't, because you really don't want to constrain the document creator/sender to use one form rather than the other.
--Michael Kay on the xsl-list mailing list, Saturday, 3 Nov 2007 08:49:51
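Kay's point holds for any conforming XML parser; a minimal sketch using Python's standard library shows that the application sees identical text either way:

```python
import xml.etree.ElementTree as ET

# The same character written natively and as a numeric character reference:
native = ET.fromstring('<name>café</name>')
escaped = ET.fromstring('<name>caf&#233;</name>')

# After parsing, the two forms are indistinguishable.
print(native.text == escaped.text)  # → True
```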
It would have been nice if the W3C had used its influence in the browser space to define standards for browser UI. Things like session logout, usable login dialogs, and the lack of standard affordances for PUT, DELETE, etc., are all left undone because "that's how Mosaic did it" was the only standard that mattered.
--Roy T. Fielding on the REST Discuss mailing list, Monday, 2 Oct 2006 16:14:46.
--Don Reisinger
Read the rest in Say goodbye to Blockbuster | The Digital Home - Don Reisinger blogs about the tech closest to home
At best, we have a fundamental conflict of visions and technical values between the majority and the minority.
--Brendan Eich
Read the rest in Brendan's Roadmap Updates: Open letter to Chris Wilson.
--Joe Gregorio
Read the rest in Joe Gregorio | BitWorking | Do we need WADL?
There are literally dozens if not hundreds of billions of documents already on the Web. A study of a sample of several billion of those documents with a test implementation of the HTML 5 Parser specification that I did at Google put a very conservative estimate of the fraction of those pages with markup errors at more than 78%. When I tweaked it a bit to look at a few more errors, the number was 93%. And those are only core syntax errors — it didn't count misuse of HTML, like putting a p element inside an ol element.
If we required browsers to refuse those documents, then you couldn't browse over 90% of the Web.
--Ian Hickson
Read the rest in Conversation With X/HTML 5 Team.
--James Gosling
Read the rest in James Gosling: on the Java Road
your homegrown XML is practically RDF already.
--Brendan Taylor on the atom-syntax mailing list, Saturday, 6 Oct 2007 09:23:16.
The fact that the core devs don't take documentation, consistency, and unit-testing seriously is a significant point in the argument about whether Wordpress is a business-ready platform or just a toy.
--Mike Purvis on the wp-hackers mailing list, Saturday, 13 Oct 2007 00:22:10
unifying relational data and XML is a good move. Imagine that an organization has data in RDBMS, as well in the XML form. That organization needs to produce something (for e.g. reports of some kind, or say an application) by unifying information from the relational and XML world. Then we need to join data stored in the RDBMS and XML. If we don't have a unified relational/XML store, I think, it'll be slightly difficult to join data from the two worlds (although not difficult for good application programmers).
--Mukul Gandhi on the xml-dev mailing list, Friday, 19 Oct 2007 22:37:12
The health care industry has embraced XML probably more than any other. It's virtually impossible to build EHR/EMR software without XML support and still be compliant.
--Steve Manes on the NYPHP Talk mailing list, Tuesday, 14 Aug 2007 11:22:05
the DOM-like approach of defining an IDL interface and then adapting it to Java leads to poor usability; I would prefer something designed specifically for Java.
--Michael Kay on the saxon-help mailing list, Wednesday, 2 Aug 2006 08:41:56
It's a rare pleasure to come across a user interface on the Web that uses dialog controls correctly. Even something as simple as radio buttons and checkboxes are incorrectly used half the time. And let's not even get started on drop-down menus, which are horribly abused, or the homemade scrollbars that deface most Flash sites.
--Jakob Nielsen
Read the rest in Tabs, Used Right: The 13 Usability Guidelines (Jakob Nielsen's Alertbox)
I've seen a few standardized XML schemas in vertical industries that are actually invalid schemas but made it all the way to becoming standards because the schema authors used XML Spy as their XML editor of choice.
Now companies like Microsoft have to deal with angry customers who complain that our tools reject their schemas which were authored with the "industry's leading XML tool" or which have now become standards in their particular business sphere.
I've actually seen some people suggest we ship what is basically "XML Spy bug compatibility mode" so that we can interoperate with their tools since they have flagrantly decided to ignore parts of the W3C XML Schema recommendation. It seems that the decision makers at XML Spy fail to realize that the only reason for standardizing on an XML Schema language is so we have interoperability across various platforms and tools.
--Dare Obasanjo on the xml-dev mailing list, Friday, 22 Oct 2004
when I painstakingly put that whitespace in a document to make it easier to edit in the environments I use, I resent editors that completely trash it, not only making it harder to read, but making cvs diffs a lot harder to interpret, since virtually every line has changed by simple virtue of using that particular editor on a file, changing perhaps one character.
--Jonathan Robie on the xml-dev mailing list, Wednesday, 17 Oct 2007 21:54:41
I'm a huge fan of SQL, and I've been using it for over 10 years. its a solid and reliable friend. But it seems to be far too wordy and gets hairy to maintain, which is why we tend to look for ways to modularize it within our programming languages. When using SQL, we're just working with strings. mysql_query('SELECT * FROM customers') is as painful as using innerHTML in javascript. In some instances, you just have to, but it 'feels right' to use the DOM, and the DOM allows so much more power from a javascript perspective.
--Mark Armendariz on the NYPHP Talk mailing list, Saturday, 15 Sep 2007 15:24:30
if it's a closed system with specific clients, there likely will not be any benefit to using Atom. If you wish to enable interchange and interop with other applications, there will be benefits to using Atom, if only to leverage the existing tool support.
--James M Snell on the atom-syntax mailing list, Friday, 05 Oct 2007 17:06:59
One of the leading financial institutions in New York adopted XML about 7 years ago because they need to archive stock and trading information for as long as 20 years. They'd had problems with obsolete media and file formats, such as WordPerfect 5.1.
--Ken North on the xml-dev mailing list, Sunday, 9 Sep 2007 11:50:01
For a fine example of redundancy, consider all of the forms with a pop-up list asking for type of credit card. Why? Credit card numbers come in predefined patterns. If it starts with a 4, it's a Visa. If it starts with a 5, it's a MasterCard. There is no reason for any Web form to ask me what type of card I'm using when it's about to get the card number.
--Peter Seebach
Read the rest in The cranky user: Ho ho hum online retailers
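Seebach's observation is mechanical enough to sketch in code. This is a minimal illustration of the two prefixes he names; the function name and fallback branch are my own assumptions, and real issuer identification uses fuller IIN prefix ranges:

```python
def card_type(number: str) -> str:
    """Guess the card network from the leading digit, as the quote describes.
    Only the two cases mentioned in the quote are covered."""
    if number.startswith("4"):
        return "Visa"
    if number.startswith("5"):
        return "MasterCard"
    return "unknown"

print(card_type("4111111111111111"))  # → Visa
```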
I want the markup to explicit and self-documenting, as opposed to off in the PSVI which you only compute by fetching another (potentially large & complex) resource and processing it. I have grave concerns about the PSVI in general and its "implicit claims to be generic" in particular.
--Tim Bray on the xml-dev mailing list, Monday, 30 Sep 2002
ISO is an organization designed to handle negotiations among countries. That's not a bad idea for things that tend to depend on heavy national regulation, like, say, smokestack industry or retail, but it makes little sense for computer technology and networking standards -- our problem is not getting the Ukraine, Tanzania, and New Zealand to use the same standards, but getting the open source people, Nokia, IBM, and Nortel to play nicely together. Using ISO's national-body structure for negotiating computer standards is about as effective as the two of us negotiating a mideast peace plan, then expecting Yasser Arafat and Ariel Sharon to thank us and implement it.
--David Megginson on the XML Developers mailing list, Thu, 29 Apr 2004
you shouldn't expect any performance or memory size improvements in going from DOM to JDOM (and quite possibly some loss of performance). As I see it JDOM's advantage is mainly in supplying an API that's easier for Java developers to understand.
--Dennis Sosnoski on the jdom-interest mailing list, Wednesday, 08 Dec 2004
five years from now if someone is still coding in HTML and tables and not fully CSS compliant and using XHTML and XML to address multiple devices I'll fire them, or at least send them to some classes, which is more compatible with my value system.
--Robert Harrison on the wwwac mailing list, Sunday, 27 Sep 2007 12:06:12
These days, with the advent of Petabytes in a trailer and data clouds that can be rented for pennies per MB / year there really isn't any reason to leave the data on some offline archival device. Once it's online the mechanics of access are pretty much irrelevant.
--Peter Hunsberger on the xml-dev mailing list, Tuesday, 11 Sep 2007 16:01:45.
--Kurt Cagle on the XML Developers List, Sunday, Feb 2005 10:52:23
The emergence of XML as a dominant data container came as a result of cheap processing cycles and memory, not because of sudden realizations of its utility for non-page-based applications. The uptake of that idea on the web was quick and that gives the appearance of invention where it is only broader acceptance.
--Len Bullard on the richard mailing list, Wednesday, 26 Sep 2007 09:29:33
Net Neutrality once was called Common Carriage. Today most home users of the Net have incoming port 80 connections squelched. This is a gross violation of the rule of common carriage.
The issue is not price of bandwidth. The issue is whether we are allowed to create and use Net applications, without having to make a deal with the Duopoly.
--Jay Sulzberger on the WWWAC mailing list, Friday, 28 Sep 2007 12:39:13 -0400
the industrial markup and publishing community is not on the radar of database companies. Witness the complete disregard of our needs in the XML Schema development process, which in turn lead to the move to ISO and the progressive development of DSDL, which has become popular and useful in its small niche but has absolutely no commercial support for the big vendors. I don't expect you guys to understand what we in the industrial markup and publishing community do, or the value that having a standardized baseline format for document conversions out of Office (e.g. to ODF or any other target) would have for us. But please don't treat this as merely a game between the elephants.
--Rick Jelliffe on the xml-dev mailing list, Tuesday, 14 Aug 2007 05:35:42 +1000
I remain convinced that namespaces were a colossal mistake.
--Michael Kay on the xom-interest mailing list, Monday, 24 Jan 2005 17:52:22
--Simon Morris
Read the rest in Simon Morris's Blog: Why Rich Internet Apps Will Fail.
XML became possible only once the costs of memory and CPUs dropped, the power increased and UNICODE became available
--Len Bullard on the xml-dev mailing list, Tuesday, 11 Sep 2007 14:50:29
There are times we need to leverage what we learned in the past and retrieve information from decades ago. NASA has been drawing on Apollo program technology for building the new Ares 1 moon rocket. (Some of the young engineers on the Constellation program weren't even alive in 1969!) NASA has been visiting museums and borrowing artifacts such as the Apollo operations manual.
Apollo was pre-SGML. Automated word processing in that era was the Friden Flexowriter, which produced a paper tape, and IBM's new MTST (Magnetic Tape Selectric Typewriter). While working on the Goddard Real-Time System (GRTS), I saw neither. We kept computer printouts of source code and link edits, but our office documents were all produced with an IBM Selectric and distributed as Xerox copies. Searching for documents that reference GRTS, I found a couple of PDFs in the NASA archives that are scans of '60s documents. The printed documents from that era are still readable today, but I'm not sure about being able to retrieve a document from a Flexowriter tape or IBM MTST tape.
So even with standard office file formats, there's still the problem that electronic documents may not be retrievable in the future due to changing digital media technology.
--Ken North on the xml-dev mailing list, Sunday, 9 Sep 2007 11:50:01
It started in MA and went around the world - a realisation by government that electronic documentation has really replaced paper in a very large number of cases. And from that follows the requirements for the law to continue functioning in a fair and open manner that electronic documents used by government and public companies - at least - should be accessible on a permanent basis irrespective of the existence, let alone success or failure, of the developer of the electronic format.
--Rick Marshall on the xml-dev mailing list, Sunday, 09 Sep 2007 09:36:14
One of XML's greatest strengths is at the low end, with small applications and even one-offs that don't get very far from home. While these don't get any glory, the gains in productivity they add to any shop that knows how to build and use them are probably impossible to calculate, but not small.
--Wendell Piez on the xsl-list mailing list, Sunday, 06 Sep 2007 14:47:48 liked the final Namespace spec, even though it wasn't what I had originally argued for, but when you have a spec that almost *everyone* ignores or gets wrong (XSLT and SOAP excepted), it might be time to acknowledge that the problem is the spec instead of the implementors. I predict that the use of XML Namespaces will be an ongoing problem for Atom, even though it's not Atom's fault.
--David Megginson
Read the rest in ongoing · Bad, Feed Readers, Bad!
Unfortunately our release cycles here at MS tend to be very long and conservative, at least compared to some of you cowboys. :)
--Joe Cheng, Microsoft on the atom-pub mailing list, Wednesday, 8 Aug 2007 11:25:00
the only way to get namespaces that wrong is not to use a proper XML parser, or to let it run in non-namespace aware mode.
--Julian Reschke
Read the rest in ongoing · Bad, Feed Readers, Bad!
CSS and Javascript debugging tools are of poor quality or non-existent.
--M. David Peterson on the xsl-list mailing list, Sunday, 16 Sep 2007 11:54:27
There are applications which serialize Xerces' DOM using Java's object serialization services which rely on these classes being compatible from release to release. Aside from moving around and removing transient fields, it will be difficult to trim the size of the DOM implementation without breaking serialization compatibility. Probably seemed like a good idea at the time but making all the classes implement java.io.Serializable has significantly reduced our ability to make structural changes.
--Michael Glavassevich on the j-dev@xerces.apache.org mailing list, Sunday, 13 Nov 2005 12:24:52
One popular technique for building readership is to send e-mail to more well-trafficked blogs offering to exchange links with them. One popular response from those blogs is to laugh derisively and hit the Delete button.
--Lore Sjöberg
Read the rest in Wired News: The Ultimate Blog Post
Ninety percent of web design is redesign.
--Jason Santa Maria
Read the rest in An Event Apart Boston 2007
None of the binary XML formats we've seen greatly reduce the bandwidth or processor burden of XML in general. If you have a very specific scenario, you can get some good results, but those same techniques seldom carry over to other scenarios.
--Michael Champion on the xml-dev mailing list, Monday, 3 Sep 2007 15:38:29 -0700
True 'citizen journalists' are people like Iraqi news journalists working where western photographers dare not go, to document the destruction of their homeland. Despite putting themselves and their families in peril 24 hours a day, most if not all of them earn a pittance and many relinquish their copyright on images and stories which make the front pages of the world's newspapers. Just this year alone, 32 have died.
Baghdad has a mobile phone network, but mobile phone image gathering is virtually unknown (unless it's execution footage), as it would be tantamount to a death sentence for most residents. Instead, another form of journalism keeps us passively 'informed' from only one perspective - embedding.
--Sion Touhig
Read the rest in How the anti-copyright lobby makes big business richer
JavaScript is clearly a powerful drug. Everybody that sells it will please include these two documents in the package...
"Powerful languages inhibit information reuse." --
"see how we can use Javascript, but still maintain accessibility" --
--Dan Connolly on the www-tag mailing list, Sunday, 16 Aug 2007 17:46:03
It's very much a design assumption in XML schema that a namespace has only one schema.
It's a slightly odd assumption really, because it's at variance with another design principle of XML Schema, which is that the same document can be validated against different rules depending on the user's preferences - for example the sender of a document might apply stronger validation than the recipient. But the assumption is there.
The assumption seems to be less strong in the case of the not-a-namespace, otherwise facilities like chameleon schemas wouldn't be provided. But it's still there. You get into trouble, for example, if you try to do a schema-aware transformation or query from one no-namespace schema to a different no-namespace schema.
--Michael Kay on the xml-dev mailing list, Wednesday, 22 Dec 2004.
--Rick Jelliffe on the xml-dev mailing list, Wednesday, 29 Nov 2006 12:46:06
there aren’t that many apps where parsing and unparsing are a significant part of the workload.
--Tim Bray
Read the rest in ongoing · JSON and XML
As an XML developer, one of the problems that I come across almost invariably within these languages is the fact that they are shaped by people who view XML as something of an afterthought, a small subset of the overall language that's intended to satisfy those strange people who think in angle brackets. However, one side effect of this viewpoint is that a rather disturbing amount of server code is still being written with HTML content (and often badly formed HTML at that) being written inline as successive lines of composed strings. For instance, it's not at all unusual to see inline PHP that looks something like:

$buf = "<html><head><title>" . $myTitle;
$buf .= "</title><body>";
$buf .= "<h1>This is a test.</h1>";
$buf .= "<p>If this were an actual emergency, we'd be out of here by now.";
echo $buf;
Not surprisingly, with this particular approach, your ability to create modular code is virtually nil, the likelihood that you as the developer of this particular page will spend many late hours trying to figure out why your table fails to render properly after the twelfth row (and causes the browser to crash after the 200th) is correspondingly high, and maintaining it after three months well nigh impossible.
--Kurt Cagle
Read the rest in XML.com: XQuery, the Server Language
Unfortunately, SCO is not the only company that is attempting to use the specter of unsubstantiated intellectual property infringement allegations as a weapon against competitors. "Always two there are," it is said. SCO's litigation war chest was partially furnished by Microsoft, which provided SCO $16 million in UNIX licensing fees and helped SCO secure tens of millions more from Baystar Capital. As Microsoft continues to trumpet baseless and unsubstantiated patent infringement allegations in its war against Linux, the company should take a close look at the fall of SCO and ask itself if it wants to follow SCO down the same road to ruin. Once you start down the dark path, forever will it dominate your destiny. Microsoft obviously has more resources than SCO and could probably endure a protracted legal battle forever, but what would it ultimately accomplish?
--Ryan Paul
Read the rest in Requiem for a legal disaster: a retrospective analysis of SCO v. Novell: Page 4
E4X is what the DOM should have been within the ECMAScript environment. The DOM was way too “CORBA” oriented to be practical in ECMAScript.
--Didier PH Martin on the xml-dev mailing list, Wednesday, 17 Jan 2007 13:31:41.
--Suprnova.org
Read the rest in Suprnova.org relaunches, taunts The Powers That Be
I’m a legal academic and I woke up one day and thought, "Why can’t I get cases the same way I get stuff on Google?” People should be able to get cases easily. This is a big exception to the way information has opened up over the past decade.
--Tim Wu, Columbia Law School
Read the rest in A Quest to Get More Court Rulings Online, and Free
At the very least, browsers that render erroneous code should pop an error message saying "This page's code contains errors, but the rendering engine is going to take a guess and try to render it in a way that is readable."
Who would want *that* to pop up on their pages? It would force the lazy and ignorant to fix their pages, but it would still allow one to see the content of the badly-coded pages.
--David W. Fenton on the wwwac mailing list, Friday, 23 Mar 2007 08:53:12
Any number of times I have had an itch where scratching it involved JavaScript, so I'd google for "JavaScript tutorial". The top hits are full of suggestions to do things that no self-respecting software engineer should do:
- I know JavaScript has throw/catch; why do so many tutorials use alert()?
- Self-modifying code (document.write()) is an awfully big hammer; why does it show up in simple hello-world examples?
Worst of all, why do so many tutorials fail to cite whatever sources they are based on? They don't claim to be exhaustive or authoritative, so I expected a "for more details, see ..." link. No joy. For example, the w3schools javascript tutorial says to use text/javascript but the IETF spec deprecates that in favor of application/javascript.
--Dan Connolly
Read the rest in Notes on GRDDL/JavaScript Development
Java is object-oriented, XML is hierarchical, and relational databases are tabular. The mapping between these three different data models generates a lot of zero-value-added work in developing an application. When you’re XML top-to-bottom, poof, that work’s all gone.
--Dave Kellogg, CEO Mark Logic
Read the rest in Mark Logic CEO Blog: Web Applications: The Virtues of Top-to
The WS* stack is a morass of complexity - it's starting to make the CORBA boomlet of the early 90's look simple.
--James Robertson
Read the rest in WS* Barbarians at the Gate
Even after all this time, it amazes me how many poorly constructed websites there are out there.
Even more amazing is that the browsers display the code anyways.
--Ron Trenka on the WWWAC List mailing list, Friday, 23 Mar 2007 08:16:51
Oh, sure, they've made a few half-assed attempts to make IE standards-compliant, sort of, but only after making many full-assed attempts to distort those standards to give Microsoft competitive advantages. I've heard that directly from folks working on the relevant teams over there. Microsoft cheerfully shows up at the standards meetings to make damn sure they screw up the APIs for everyone else. You know. Microsoft-style. Sorta like how DirectX was bugly compared to OpenGL. Or Win32 compared to *nix. Or MFC compared to any sane object system (e.g. TurboPascal and TurboC). Or COM compared to CORBA. (I mean, you have to work hard to be worse than CORBA.) Microsoft has always been awful at making APIs, always always always, and I've decided over the years to credit this to malice rather than incompetence. Microsoft isn't incompetent, whatever else they might be. Burdened, yes; incompetent, no.
--Steve Yegge
Read the rest in Stevey's Blog Rants: Blogger's Block #3: Dreaming in Browser Swamp
if an XSD validator is even in the message path, no one turns it on, because it’s too computationally expensive, not completely implemented, and unable to perform all the validation required (e.g., date X should be no more than 30 days divergent from date Y).
All of this is not to say that a rigorous, machine-readable description of an XML message isn’t useful at development time and even runtime, it is. But the notion of assigning type to elements and attributes is unnecessary. After all, how many HTML forms have been processed successfully, and those are submitted as just name/value pairs. And even when a message description is available, using it to generate code that treats remote resources as local objects and messages as their serialization is counterproductive. XML is for representing structured information not serializing objects.
--Pete Lacey
Read the rest in InfoQ: Interview: Pete Lacey Criticizes Web Services
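Lacey's example of a constraint beyond XSD's reach, that date X should be no more than 30 days divergent from date Y, takes only a few lines of ordinary code. A minimal sketch (the function name and sample dates are illustrative):

```python
from datetime import date

# The cross-field rule XSD cannot express declaratively:
# date X should be no more than 30 days divergent from date Y.
def within_30_days(x: date, y: date) -> bool:
    return abs((x - y).days) <= 30

print(within_30_days(date(2007, 3, 1), date(2007, 3, 20)))  # True
print(within_30_days(date(2007, 1, 1), date(2007, 3, 20)))  # False
```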
The best way to do a shopping cart RESTfully is to use standard mark-up to describe items that can be purchased and allow the user agent to "move" items from whatever page they happen to be looking at into their own browser's virtual cart. The mark-up can describe where to go for check-out, and the cart could contain items from many different merchants. In other words, all of the state remains on the client. The reason we don't do it that way now is partly because shops don't believe in waiting for standard media types to be updated, and partly because Netscape became gun-shy after the response to their early HTML extensions.
--Roy T. Fielding on the rest-discuss mailing list, Sunday, 26 Apr 2007 13:20:13
That's the problem with adding complexity to, well, anything -- there are always people who will rush to use the complex parts, just because they can.
--David Megginson on the xml-dev mailing list, Sunday, 18 Mar 2004.
--Rob Weir
Read the rest in An Antic Disposition: The Formula for Failure
--Tim Bray on the atom-syntax mailing list, Tuesday, 28 Nov 2006 15:57:50
--Mark Baker
Read the rest in Integrate This.
--Kurt Cagle
Read the rest in xforms vs. ruby - a rebuttal (sort of).
--Donald Norman
Read the rest in Don Norman's jnd.org / Human
Whale feces or working at Microsoft? I would probably be the whale feces researcher. Salt air and whale flatulence; what could go wrong?
--Michael Moyer, Popular Science
Read the rest in Macworld: News: Microsoft security group makes 'worst jobs' list
JSON is good at solving the particular problem of sending pairs of named/typed fields and their values, where the values can themselves (recursively) have that same structure.
XML is aimed at a much broader class of uses. For example, while one can niggle about the details, XHTML does a pretty good job of conveying HTML in the form of XML, including all the mixed content stuff like <p>My point is that this paragraph has <emph>mixed</emph> content, in which markup occurs within strings.</p> JSON doesn't even try to do that in a standard way. JSON also doesn't do a lot to support the distributed invention of cosmically-unique names, as namespaces do.
--Noah Mendelson on the xml-dev mailing list, Friday, 20 Jul 2007 09:55:44
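Mendelson's mixed-content point is easy to demonstrate: XML carries markup inside character data natively, while JSON needs an invented convention. The JsonML-style array encoding below is one such convention, not anything the JSON spec standardizes:

```python
import json
import xml.etree.ElementTree as ET

# Mixed content: character data interleaved with child elements.
xhtml = ('<p>My point is that this paragraph has '
         '<emph>mixed</emph> content, in which markup '
         'occurs within strings.</p>')

def to_jsonml(elem):
    """Encode an element as a JsonML-style array: [tag, child, ...].
    This is one ad hoc convention; JSON itself defines nothing here."""
    out = [elem.tag]
    if elem.text:
        out.append(elem.text)
    for child in elem:
        out.append(to_jsonml(child))
        if child.tail:
            out.append(child.tail)
    return out

tree = ET.fromstring(xhtml)
print(json.dumps(to_jsonml(tree)))
# The nesting survives, but only because we invented a scheme for it.
```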
Using JSON for anything else but server-to-browser communication is a mistake. Using anything else than JSON for server-to-browser communication is a mistake as well. In short, use the tool that fits the job and don't be indoctrinated by it.
--Steve Bjorg on the rest-discuss mailing list, Sunday, 10 Jun 2007 17:45:08.
For.
--Richard M. Stallman
Read the rest in GPLv3 - Transcript of Richard Stallman from the fifth international GPLv3 conference, Tokyo, Japan; 2006-11
My friend Ira, who lives in Yokohama, Japan, has 100-megabit-per-second fiber-optic Internet service in his home. This costs Ira less than $30 per month. What the heck is up with that? Ten years ago, the United States had the fastest and cheapest residential Internet service in the world. Today U.S. residential Internet service, especially broadband, is among the slowest and most expensive.
--Mark Stephens
Read the rest in I, Cringely . The Pulpit . When Elephants Dance | PBS
The consumer electronics companies really have their collective head so far up their ass they’re wearing their tongue for a hat.
--Adam Fields
Read the rest in Adam Fields (weblog) - » Why am I writing about HD home theater frustrations?
Most non-technical people I know pretty much live in their browsers, and they only emerge periodically to stare in puzzlement at iTunes or a game or something, and wonder why isn't it in the browser, because everything else useful seems to be. It's where the whole world is. To non-technical people, of course. Which is, like, practically everyone.
--Steve Yegge
Read the rest in Stevey's Blog Rants: Blogger's Block #3: Dreaming in Browser Swamp.
--Kurt Cagle on the XML Developers List mailing list, Sunday, Feb 2005 10:52:23
The CODASYL data model ultimately foundered because of its unwieldy links, and XLink foundered trying to do something similar for XML. Maybe the lesson here is that the relational model approach of defining links *dynamically* based on relationships on the *values* of information items rather than predefined links really is the way to do what XLink tried to do.
--Michael Champion on the xml-dev mailing list, Friday, 22 Oct 2004
the problem it seems to me with JSON is that it both has a beginning of a semantics, it has types for true, false, and numbers, and at the same time it does not have enough. The spec is completely at the syntactic level. The semantics it has come from it being so closely tied to JavaScript, which has a procedural semantics. Number refer to numbers because that's the way JavaScript will interpret them.
--Henry Story on the rest-discuss mailing list, Friday, 13 Jul 2007 12:27:59
You can’t install new programs from anyone but Apple; other companies can create only iPhone-tailored mini-programs on the Web. The browser can’t handle Java or Flash, which deprives you of millions of Web videos.
--David Pogue
Read the rest in The iPhone Matches Most of Its Hype
Less than 0.1% of the documents on the web actually conform to what's in their doctype declaration
--Dan Connolly on the www-tag mailing list, Monday, 16 Apr 2007 13:31:17
When we do information modelling we ask questions like "what is a flight?", "if a flight involves a stopover, is that one flight, two flights, or three?", "if an extra plane is laid on to handle extra demand, is that the same flight or a different flight?". I know how to tackle these questions within the confines of a closed system where we can agree the terms and what we mean by them. A smallish group of people can get together and decide on precise definitions of the terms they are using within a limited domain of discourse.
I simply don't believe that it can be done universally, and what worries me is that there seem to be people who think it can. What I mean by "flight" depends on the conversation I am having at the time, and calling it instead isn't going to change that. OK, we could define 120 different URIs to cover the different precise meanings of the word, but that would only reduce our ability to communicate with each other. There's a good reason why language is fuzzy and full of nuance: if it were possible to develop a precise and unambiguous and unchanging vocabulary we would have evolved one years ago. Deciding that every distinct concept is going to have a distinct URI is just simplistic: like tons of bricks or piles of sand, concepts are amorphous and lack clear identity.
--Michael Kay on the xsl-list mailing list, Monday, 12 Dec 2005 21:21:46
Processing content requires only that the recipient be able to understand it. Validation plays no role in that. At best it's just one way that a document recipient can identify content that might not be understood. But even without validation the content would certainly be found to be "invalid" eventually, as processing is attempted.
--Mark Baker on the www-tag mailing list, Tuesday, 3 Apr 2007 00:26:24
XML has gradually become rather a mess, and it is in need of refactoring. Perhaps it really is time that as a community we started to think about doing it again, and doing it better next time. Personally, I suspect we haven't quite reached that point yet: the benefits of conformance are still too high. I'd give it another five years.
--Michael Kay on the xml-dev mailing list, Wednesday, 12 Oct 2005 16:28:31
Yes, everyone hates XSD. The thing is that XSD has become substantially *more* complicated in recent years, as Web Services have come along. Not only do people have to understand XSD, but their schemas may also partially be specified using wsdl:message syntax. (This lets you specify what the soapenv:Body element must contain, because the official SOAP schema is open. XSD's poor capabilities to support openness seem to be the root problem here.)
--Rick Jelliffe on the xml-dev mailing list, Tuesday, 28 Nov 2006 18:23:58
You know what I love about the GPL? Regular lawyers can't understand it. We've seen that over and over. I think it is so different from what they are used to, they can't get their heads around it, brainiacs though they may be. It seems unnatural to them, and I guess they can't believe it means what it says. But it means it.
--Pamela Jones, Groklaw
Read the rest in Groklaw.
--Tim Bray
Read the rest in ongoing · SOA and WCF
Firefox may be getting bloated, but it's still the fastest Windows browser, particularly for running Google web applications.
--Dylan Tweney
Read the rest in Compiler
What do whale-feces researchers, hazmat divers and employees of Microsoft Corp.’s Security Response Center have in common? They all made Popular Science magazine’s 2007 list of the absolute worst jobs in science.
--Robert McMillan, IDG News Service
Read the rest in Macworld: News: Microsoft security group makes 'worst jobs' list
computer programming is one of those fields where an immigrant who doesn’t speak English can still be a brilliant programmer.
--Joel Spolsky
Read the rest in Sorting Resumes
It seems pretty obvious to me that any server that changes the client-defined content of an entry (author clearly being one of those fields that cannot possibly be determined mechanically) is failing to follow the intent of the PUT, so if it returns 200 in that situation with the intention of changing the author then it is broken, both in terms of HTTP and Atom. That's broken, as in, violates the semantics of the data format regardless of who wrote the client -- it has broken operability, not interoperability.
--Roy T. Fielding on the atom-protocol mailing list, Wednesday, 14 Mar 2007 19:02:57
As a WordPress user, I’m amazed that every time I want to add a new feature, I just do a quick search and find a WordPress plug-in that does exactly what I need.
--Scott Karp
Read the rest in WordPress vs. Movable Type: Open Source Blogging Software Showdown » Publishing 2.0
Does anyone still believe that web services will be published and consumed indiscriminately on the open Internet? I keep on seeing references to that early vision as if it's still alive, but surely everyone realizes by now it was just a geek pipedream, the idea that your servers would just go out on the Internet and 'discover' services listed by all-comers in registries conforming to the pompously-named "Universal Description Discovery and Integration protocol" (ie UDDI).
--Phil Wainewright
Read the rest in Trust, contracts and UDDI - Loosely Coupled weblog, Nov 12th 2004 10:12am.
--Tim Bray
Read the rest in ongoing - I Like Pie
It may be too late to compete effectively with Flash. But if Desktop Java doesn’t make a stand here, then frankly, where is it going to assert its relevance? Webapps have eliminated many of the use-cases that applets once targeted, and distribution frustrations (among other factors) continue to make double-clickable Java desktop applications a tough sell. Ajax is all but the final insult: at the end of the day, script manipulating UI widgets in a browser isn’t that different than Java bytecodes manipulating AWT or Swing widgets… except for the fact that Ajax is infinitely more popular than any of the Java client technologies ever were.
--Chris Adamson
Read the rest in Rebooting Java Media, Act I: Setup
Because the standoff between Microsoft and the Forces of Neutrality (open standards and the like) is the main thing that's holding JavaScript back. Nobody wants to build an amazingly cool website that only works in FireFox/Opera/(insert your favorite reasonably standards-compliant browser here). Because they're focused on the short term, not the long term. It would only take one or two really killer apps for Mozilla to take back the market share from Microsoft. That, or a whole army of pretty good ones. People don't like downloading new stuff (in general), and they also don't like switching browsers. But they'll do it if they know they have to in order to use their favorite app.
Everyone knows all this; not a jot of it is news to anyone, but nobody wants to be the one to make a clean break from IE. It might bankrupt them. Great app, nobody sees it, company goes bust. So the killer apps will have to come from the fringe, the margin, the anarchy projects engineers do on the side — at least at companies where engineers have a little free time for innovation. Excepting only go-for-broke startups, most places can't (or won't) bet the farm on a Firefox-only application. So even though the spec is moving forward, or maybe sideways, DHTML in the real world has been in near-stasis for years.
--Steve Yegge
Read the rest in Stevey's Blog Rants: Blogger's Block #3: Dreaming in Browser Swamp
A resource is like an object, and a URL is like a pointer to that object. If I have the same pointer, it will be the same object at the other end of that pointer. Over time the state of that object may change, so when I GET it I'll retrieve different results. It still means the same thing.
--Benjamin Carlyle on the rest-discuss mailing list, Tuesday, 31 Oct 2006 08:42:20
a standard body is not only a cool place where friendly geeks meet, drink (sometimes) free beer, and write standards for the beauty of standards.
A standards body is a battlefield, where organizations want to push THEIR OWN competitive advantage, be the first one to blabla, the best one to blabla, where they hope to be THE solution's provider when multiple solutions are on the table because THEY can implement it before others.
--Daniel Glazman on the whatwg mailing list, Sunday, 11 Mar 2007 14:35:09
--Joel Spolsky
Read the rest in Apple Safari for Windows: The world's slowest web browser
DRM's sole purpose is to maximize revenues by minimizing your rights so that they can sell them back to you.
--Ken Fisher
Read the rest in Privately, Hollywood admits DRM isn't about piracy
Jean Paoli and Tom Robertson share a tear-jerking story on how Microsoft has "stepped up efforts" and "listened to customers." Microsoft "congratulates Ecma" for producing a 6,000-page specification that will "spark an explosion of innovation." The enemy, on the other hand, is using the "standards process to limit choice in the marketplace for ulterior commercial motives." Microsoft has the nerve to criticize competitors for having commercial motives?
--Håkon Wium Lie
Read the rest in Microsoft's amusing standards stance | Perspectives | CNET News.com
I don't, however, see much evidence that the "REST calculus" of GET/PUT/UPDATE/DELETE resources by transferring representations is actually used to model real applications of any complexity. Instead, people use GET for what it is obviously good for, and use POST as sortof a DoStuff() for everything else. In other words, most have learned to GET RESTfully, but just tunnel HTTP as shamelessly as any WS-* advocate for everything else.
--Michael Champion on the xml-dev mailing list, Sunday, 23 Feb 2006 13:55:54
You.
--Jens Alfke
Read the rest in Thought Palace » Blog Archive » In Which I Think About Java Again, But Only For A Moment
The costs of tinkering at the edges of XML far exceed the benefits, as the XML 1.1 fiasco demonstrates all too clearly.
--Michael Kay on the xml-dev mailing list, Friday, 18 Aug 2006 09:55:24
In practice, having two incompatible versions of XML also makes life difficult for applications; if you generate XML 1.1, other applications might not be able to read it, and if you accept XML 1.1 you can't safely save it as XML 1.0 as it might use forbidden characters in markup.
The easiest solution to all these problems is just pretend that XML 1.1 never happened, which is what most people seem to be doing. I think that it would be best if the W3C recognised that the attempt has failed, and that trying to push it further would be counterproductive.
--Michael Day on the xml mailing list, Friday, 15 Jun 2007 16:29:33
It's rare that a change to a spec makes everyone happy. I'm not sure we (W3C) made _anyone_ happy with XML 1.1, unfortunately.
--Liam R E Quin on the xml mailing list, Sunday, 14 Jun 2007 21:56:30
whenever mainstream media reports on a field I know something about, the errors are usually large and obvious. This makes me wonder about the fields I know little or nothing about, and leads me to believe that most reporters don't even qualify as generalists. The exceptions tend to be in narrow fields where you get truly passionate people - sports and movie/theater reviews, for instance.
What's happening with the web right now is that the minimal generalists of the media are being disintermediated as our sole sources of information - we can now hear from actual experts who can give us their opinions without "joe reporter" as the middle man. For obvious reasons, reporters dislike this trend, but that's the way it is. The carnage that's happening in the US newspaper business is the leading edge of that change-over, and it can't happen soon enough as far as I'm concerned.
--James Robertson
Read the rest in Professional Media Aren't
Massachusetts is mandating one standard to the exclusion of other specifications. But that’s good; you want to pick a single standard for a given purpose, to the exclusion of others, as long as all suppliers can implement the standard without unrelated restrictions. If we had two signal light standards, where red meant “stop” in one and “go” in another, we’d obviously have bad results. One of the biggest problems in information technology is that in some areas there are too many standards, instead of a single standard that everyone can agree on and use. Mandating a single standard for a given area is a very good thing, if all suppliers can implement the specification without legal, monetary, or other restrictions or discriminations. Massachusetts has really a strong case for selecting OpenDocument (the topic of this letter) as this standard
--David A. Wheeler
Read the rest in GROKLAW
Apple has managed to make it practical to view standard web pages on a 3.5 inch screen. I’ve thought from the beginning that the drastic compromises being made to wedge reduced-content web pages into current handheld devices was an interaction dead end, and I couldn’t be happier with the job Apple has done here. If Apple doesn’t carry over this technology into some kind of slate computer, they are not nearly as bright as I think they are.
--Bruce Tognazzini
Read the rest in The iPhone User Eperience: A First Look
Although an XForms plug-in is being implemented in Mozilla, the development of this plug-in takes longer than the birth of a human baby. Although standardized by the W3C as part of XHTML 2.0, XForms are widely ignored as "the next big thing".
--Adriaan de Jonge
Read the rest in XForms vs. Ruby on Rails.
--Jakob Nielsen
Read the rest in Digital Divide: The Three Stages (Jakob Nielsen's Alertbox)
I've found XOM to be easy to use and to have a far better usefulness-to-abstraction ratio than the other Java XML stuff I've tried using.
--Ilan Volow on the java-dev mailing list, Tuesday, 19 Jul 2005 14:02:26
There are already a lot of things you can do with mozilla that you just can't do with IE. 2007 could see Mozilla-only functionality being part of the next killer app. That's an internet with IE7 left behind.
--Didier PH Martin on the xml-dev mailing list, Wednesday, 17 Jan 2007 13:28:15
Failure is a matter of expectation. Is the Wiki format a failed technology? From the POV of sales, I am sure it is; from the POV of numbers using it, compared to Office or OpenOffice, I am sure it is; from the POV of its ability to be useful in creating Wikipedia-like things, it is obviously a roaring success (and Office and OpenOffice are failures).
--Rick Jelliffe on the xml-dev mailing list, Friday, 05 Jan 2007 21:24:47
Taking the SOAP 1.1 specification in isolation, my position on it is that it went too far. Had SOAP simply defined an envelope for XML message passing it would have been a small but interesting step forward. But the SOAP spec also defines an—admittedly optional—serialization mechanism; goes out of its way to be transport neutral, but then defines an HTTP binding that ignores the basic tenets of HTTP; and goes on to define a practice for using SOAP as an RPC mechanism. However, if one ignores the optional bits, SOAP itself isn’t that bad. The envelope design pattern can be useful.
--Pete Lacey
Read the rest in InfoQ: Interview: Pete Lacey Criticizes Web Services
Currently most web video seems to be H.263 in the Flash Video container, which is pretty lousy compared to all codecs discussed here.
--Maik Merten on the whatwg mailing list, Monday, 02 Apr 2007 20:50:40.
--Kurt Cagle
Read the rest in xforms vs. ruby - a rebuttal (sort of)
--Mark Baker
Read the rest in Integrate This
All you need to logout of an HTTP session is a UI for telling the browser to stop sending its cached credential, which was obvious to everyone except the browser developers. Likewise, if the browser displayed the HTTP message sent with a 401 response, then application developers could easily define their own custom login dialogs. That is all part of the design of URI+HTTP+HTML -- the only bit missing was one implementation to show all the others how to do it.
--Roy T. Fielding on the REST discuss mailing list, Monday, 2 Oct 2006 19:22:19
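Fielding's point, that a 401 response already carries a message body a browser could render as an application-defined login page, can be sketched with Python's standard library. The handler, realm, and credentials here are hypothetical:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import base64

# Hypothetical credentials and realm, for illustration only.
USER, PASSWORD = "alice", "secret"

class AuthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        expected = "Basic " + base64.b64encode(
            f"{USER}:{PASSWORD}".encode()).decode()
        if self.headers.get("Authorization") == expected:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Hello, authenticated user.\n")
        else:
            # The 401 response carries a body. A browser that displayed
            # this body, instead of popping its own modal dialog, would
            # give applications a custom login page for free.
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="demo"')
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(
                b"<html><body>Application-defined login form here.</body></html>")

    def log_message(self, *args):
        pass  # keep the sketch quiet
```

Running `HTTPServer(("127.0.0.1", 8000), AuthHandler).serve_forever()` sends both the challenge header and a renderable body on 401; mainstream browsers still show only their own dialog, which is exactly the complaint.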
Forcing good coding serves the end user in the long run because it enforces discipline on the coders, even if they're "everyone and their younger cousins." The biggest offenders in the bad code department were many of the WYSIWYG HTML editors that didn't bother to validate the code they were creating, and since the browsers were forgiving "everyone and their younger cousins" thought everything was OK. Had the browsers complained, they would have found better WYSIWYG tools, and the makers of the WYSIWYG HTML editors would have fixed their editors to produce valid HTML.
--David W. Fenton on the wwwac mailing list, Friday, 23 Mar 2007 12:20:59.
--Clay Shirky
Read the rest in webservices.xml.com: Web Services: It's So Crazy, It Just Might Not Work.
--Paul Graham
Read the rest in Microsoft is Dead
Read the rest in Wired News: Refuse to be Terrorized
Read the rest in Position Paper For the Workshop on Web of Services for Enterprise Computing
In all cases of which I'm aware, data on the web that's served as */xml is a symptom of a bug, and it is not OK for agents, web robots or any other kind, to infer #fragid rules.
--Tim Bray on the xml-dev mailing list, Sat, 20 Nov 2004?
--Tahir Hashmi on the xml-dev mailing list, Saturday, 08 Jul 2006 11:48:38
REST has been around for two decades. REST is the sum of practices that have worked on the web.
The term “REST” is new and indeed hyped, but REST is old and proven.
--A. Pagaltzis on the rest-discuss mailing list, Tuesday, 22 May 2007 16:16:53
This is the tragedy of the internet commons: we insist on building systems that assume that everyone will buy into the commons and work together, and then we're terribly put out when we hit that tipping point and the people who only see the commons as a resource to mine show up and we've not only forgotten to hire police, we don't have locks on gates -- because we forgot to build gates, and fences.
We did this first with SMTP and email; we're still trying to put that genie back in that bottle, but I think it's going to happen (and it's one reason I went to StrongMail, because they have a commitment to work with and drive standards to make it happen). Then we did it with USENET, but back then, the net was small enough we could still pretend it WAS a commons we'd all work for. But as it all grew, we started to see the problems, and believe me, a lot of good, intelligent and earnest people burnt out trying to figure out how to solve the problems that were created by making naive assumptions of trust in the design of USENET.
--Chuq Von Rospach
Read the rest in Chuqui 3.0: KATHY SIERRA: A history lesson from Usenet
It's astonishing how much effort goes into creating usability hell.
--Benjamin Hawkes-Lewis on the whatwg mailing list, Sunday, 18 Mar 2007 16:34:01
GET is safe because it's defined to be safe. The server can do whatever it wants in response to receiving a GET message, but the important thing is that the both parties (and intermediaries) understand that the client isn't *asking* for unsafe stuff to happen and so can't be held accountable.
--Mark Baker on the rest-discuss mailing list, Friday, 13 Apr 2007 08:05:53
The ability to create custom data models is an anti-feature that makes integration between different computer systems impossible because it assumes that those systems can actually understand the data. Computer systems have no such intelligence - they only understand what someone has programmed them to understand. To hit the sweet spot you must come up with a standard, simple format that every system can use.
--Charlie Savage
Read the rest in Lost in Abstraction.
--Doc Searls
Read the rest in The Doc Searls Weblog : Saturday, March 24, 2007
The thing we've missed by not having anonymous posting, however, is the wide range of opinions and perspectives that first-time and very occasional visitors can bring to a discussion. While we still have some good conversations, they haven't been nearly as lively since.
--Ed Foster
Read the rest in Ed Foster's Gripelog || Anonymous Posting Returns, I Hope
XML as specified gives the DTDs contained in a document absolute authority -- conformant processors which check validity at all MUST check it against the DTD in the document -- i.e. producers/authors determine. This was a mistake. XML Schema allows producers/authors to specify the schema to use, but also allows consumers/readers to override that specification -- but crucially, if they choose not to override, conformant processors use what the producers specified.
CSS1 allowed authors to mark a rule as 'important' -- conformant user agents MUST treat an producer/author's important rule as determining. This was a mistake. CSS2 introduced '!important' to allow the consumer/reader to override. Again, crucially, if consumers choose not to override producers' choices must be followed.
What's important here is that consumers' wishes are paramount, but it _is_ none-the-less possible for producers to state their wishes as well.
--Henry S. Thompson on the www-tag mailing list, Saturday, 31 Mar 2007 14:09:23
in REST, the enemy of GET is the proxy server that thinks it knows better. The one that returns 200+text/html when the far end 401s on you. The one that caches stuff for weeks, even when the TTL is seconds. The one that caches an incomplete download and serves up to other callers.
--Steve Loughran on the rest-discuss mailing list, Friday, 4 May 2007 11:01:16
The real issue here is that a very significant number of authors want to be able to say "make this bit of text Arial, 18pt" and the HTML spec doesn't want them to do that (for good reason).
--Adrian Sutton on the whatwg mailing list, Tuesday, 1 May 2007 19:00:53.
--Adriaan de Jonge
Read the rest in XForms vs. Ruby on Rails.
--Kurt Cagle
Read the rest in xforms vs. ruby - a rebuttal (sort of)
there were three factions in SGML: those who used OmniMark, those who used SGMLS or NSGMLS, and those who had to roll their own tools. While people in the first two factions certainly sometimes normalized their data into fully-unminimized forms, it really was the roll-your-own crowd, notably browser makers, who needed something simpler than SGML.
--Rick Jelliffe on the xml-dev mailing list, Wednesday, 7 Jun 2006 03:34:13 +1000
the proper way to deal with bad data is ALWAYS to try first and fix it at source. If you say this isn't an option, then I would want to know why. XML is an immensely valuable interchange format because it is so widely supported. The value comes both to the sender and the recipient. Generating something that is almost XML but not quite loses all this value, you might as well generate something that is completely proprietary. If your enterprise systems are producing incorrect XML, then every consumer of that data is going to incur large extra expense because they can't use standard off-the-shelf software to process it.
--Michael Kay on the saxon-help mailing list, Sunday, 11 Jan 2007 23:25:45
CSS layout is like one of those games where you slide 15 tiles around in a 16-square matrix. In principle it is a declarative language, but in practice the techniques are highly procedural: Step 1, Step 2, etc.
--Jon Udell
Read the rest in Matthew Levine's holy grail « Jon Udell
comma-delimited ASCII doesn't work just as well.
First: comma-delimited. What if the fields contain commas? Or newlines? They need to be quoted (and the developers need to know that they need to be quoted), which means you've already got an interop problem, namely, which of the half-dozen flavors of CSV are you going to use?
Second: ASCII. 'Nuf said.
Third, and most important, is the shape of the data. Not everything fits in a list of homogeneous records, which is CSV's natural shape. Of course you can wedge data that isn't shaped like an N by M table into a CSV file, but then you have to devise your own encoding scheme.
--Joe English on the xml-dev mailing list, Wednesday, 01 Jun 2005 13:20:25
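The quoting problem English describes can be made concrete with a small sketch (mine, not part of the quote) using Python's standard csv module: fields containing the delimiter or a newline must be quoted, and naive comma-splitting silently corrupts them.

```python
import csv
import io

# A record whose fields contain a comma and a newline:
row = ["Smith, Jane", "Line one\nLine two", "plain"]

# The csv writer quotes the problematic fields so the record round-trips.
buf = io.StringIO()
csv.writer(buf).writerow(row)
encoded = buf.getvalue()
decoded = next(csv.reader(io.StringIO(encoded)))
assert decoded == row

# Naive splitting on commas gets it wrong: the quoted comma and the
# embedded newline both break the field boundaries.
naive = encoded.strip().split(",")
assert naive != row
```

Even here the "which flavor?" problem remains: the csv module exposes multiple dialects (`excel`, `excel-tab`, `unix`) precisely because there is no single CSV standard.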
It's depressing to think that SOAP started just about 10 years ago and that now that everything is said and done, we built RPC again.
--Tim Ewald
Read the rest in I finally get REST. Wow.
If.
--Kevin Bondelli
Read the rest in KevinBondelli.com
I don’t have any money, but this will get you some great exposure! Heard this before, and 9 times out of 10, that’s complete and utter bullshit. Unless you have some hard evidence that your client’s project will, without a doubt, succeed, then don’t give in to this kind of ploy. Clients that tell you this are generally just looking for some free design work and aren’t worth your time.
--Tyler Lemieux
Read the rest in DesignersMind » Blog Archive » Five reasons to turn down a potential client
SVG is far superior to anything Flash offers in drawing options. Especially because it's scriptable while Flash shapes are not.
--Benjamin Otte
Read the rest in Blog for company
use XHTML 1.0 transitional instead of HTML 4.01. With XHTML it's always clear where the error really is, with HTML 4.01 you've to know the spec. by heart, and then no browser supports it.
--Frank Ellermann on the www-validator mailing list, Tuesday, 24 Apr 2007 22:41:35
HTTP auth is every bit as stateful as cookie auth -- in both cases the state is managed in the client. The only real difference is that the client doesn't know what it is managing when it comes to cookie auth, and thus ends up leaking security credentials all over the place.
--Roy T. Fielding on the REST Discuss mailing list, Tuesday, 3 Oct 2006 13:56:07
Nearly every advance the web has taken is because of hand coders. The first people to use tables to be able to take a photoshop comp and accurately display it on the web, the people using CSS to do the same, AJAX, Dojo, etc. all were hand coders who looked at the code and saw new ways to manipulate it to get the next big thing.
I can't think of anything new that came from someone who just knew how to use Visual Studio or Dreamweaver.
--Ron Trenka on the WWWAC List mailing list
In the progression of web application developers, there is a phase during which the developer has the power to build something really useful but lacks the ability to create useful URLs. This phase is dangerous because if the application becomes widely deployed, you're likely to be stuck with a namespace that complements its lack of expressibility with an overabundance of complexity.
--Derrick Pallas
Read the rest in Back That URL Up
Google? It’s like Fight Club. The first rule about Google is you don’t talk about Google. And the second rule about Google is you don’t talk about Google. Now that’s kind of secretive. But fun -- a lot of fun.
--Jeremy Allison
Read the rest in Information Architecture > Service Oriented Architectures > Novell
It's hard to find real cases where hardware acceleration (or binary XML) makes sense for application work. If you're doing anything at all that's non-trivial (building an in-memory tree, sending transactions to a database, rendering into PDF, etc.) actual parsing is going to account for 1% or less (often much less) of total running time. That means that even if you have a silver bullet that speeds up XML parsing by an order of magnitude, you'll be seeing less than a 1% speed improvement in your overall app.
--David Megginson on the xml-dev mailing list, Friday, 23 Feb 2007 10:10:02.
--Kurt Cagle
Read the rest in xforms vs. ruby - a rebuttal (sort of)
I remember when DoubleClick and third party cookies were "the big problem". It appears that Google will now be well beyond what DoubleClick could ever have hoped in terms of privacy violation.
--Andrew Gideon on the wwwac mailing list, Saturday, 14 Apr 2007 09:58:05
Learning WSDL is an exercise best reserved for the residents of the Fifth Circle of Hell
--Ted Neward
Read the rest in The ServerSide Interoperability Blog » Contract-First or Code-First Design.
--Scot Finnie
Read the rest in Windows expert to Redmond: Buh
XML caught on because people liked the idea of separating information content from presentation, and XLink never recognized that. XML says you can use any names you like for your objects and their properties, but XLink says you have to call your relationships xlink:href.
In practice people are storing information in XML form, and using XSLT to transform it into presentation formats like HTML and PDF (via XSL-FO). If you do that, you can model your relationships any way you like, and give them names that make sense. XLink just doesn't add value in that scenario.
--Michael Kay on the xml-dev mailing list, Saturday, 2 Apr 2005 10:08:19
XML is on the visible web. Some won't look.
--Claude L (Len) Bullard on the xml-dev mailing list, Monday, 24 Jul 2006 08:49:18
If you know WSDL (and I do) and you can study the source code closely (which I did) you could kind-of figure out how to use each new web service API. But, that's a lot of work. Too much work if you ask me. It made me understand why Google, eBay, and Amazon.com all provided their own proprietary toolkits for Java, C and C#: These web services were complex and demanded not only a straightforward API but plenty of documentation to back them up. With JAX-WS you get neither of those things.
--Richard Monson-Haefel
Read the rest in I, Analyst: Redeemed! JAX..
--Mark Baker
Read the rest in Integrate This » Blog Archive » Two more reasons why validation is still harmful
XForms is more elegant than pragmatic. It is a solution designed on paper instead of extracting it out of the real world. It is designed to solve common real world problems in a limited scope and fails to further evolve from implementation experience. The XForms concept was born from a vision but it is being implemented like a mandatory school assignment. The key advantages of XForms have an academic nature. This is the exact reason why I like them myself. It is also the exact reason why it fails in the real world. History teaches us that all famous theories, models and ideas in any possible science, have one thing in common. Simplicity! The complexity of the XForms specification grew out of proportions. No amount of pragmatism is able to fix that once the harm is done.
--Adriaan de Jonge
Read the rest in XForms vs. Ruby on Rails
I've decided weblogs are to this decade as editors were to the 1970s. You have to write your own. It's a pretty thin rationale - the 1970s more or less sucked as I recall*.
--Bill de hÓra
Read the rest in Bill de hÓra: Journal Migration I: export entries from
Mainly, though, W3C’s site houses a collection of proposals, drafts, and Recommendations, written by geeks for geeks. And when I say geeks, I don’t mean ordinary web professionals like you and me. I mean geeks who make the rest of us look like Grandma on the first day She’s Got Mail.™
--Jeffrey Zeldman
Read the rest in A List Apart: Articles: Fix Your Site With the Right DOCTYPE!
We can make things much easier if the people writing the software would just let go a little more and stop demanding that all the information go in some opaque (to the web) backend data silo (i.e., a database).
--Ian Bicking
Read the rest in An Easier Legacy.
That means that other products and software, in practice, will NOT be able to understand arbitrary Open XML that might be thrown at them. There is just too much. Therefore they will only create a bit that they need and send that off. Send it off to whom? The only software that might understand it, namely Microsoft Office.
So this is how I see this playing out: Open XML will be nearly fully read and written by Microsoft products, but only written in subset form by other software. This means that data in Open XML form will be largely sucked into the Microsoft ecosystem but very little will escape for full and practical use elsewhere.
--Bob Sutor
Read the rest in Bob Sutor: Open Blog | Is Open XML a one way specification for most people?
XML Schema was too much, too late.
--Peter Hunsberger on the xml-dev mailing list, Thursday, 25 Jan 2007 08:52:46
ODF is an XML-based dump of the internal data structures of OpenOffice, while OOXML is an XML-based dump of the internal data structures of Microsoft Office.
--Håkon Wium Lie
Read the rest in Microsoft's amusing standards stance | Perspectives | CNET News.com
Stop giving away the news and charging for the olds. Okay, give away the news, if you have to, on your website. There's advertising money there. But please, open up the archives. Stop putting tomorrow's fishwrap behind paywalls. (Dean Landsman was the first to call this a "fishwrap fee".) (This point is proven by Santa Barbara vs. Fort Myers, both with papers called News-Press, one with contents behind a paywall and the other wide open.)
--Doc Searls
Read the rest in The Doc Searls Weblog : Saturday, March 24, 2007
HTML started simply, with structured markup, no licensing requirements, and the ability to link to anything. More than anything, this simplicity and openness has led to its tremendous and continued success,
--Tim Berners-Lee
Read the rest in W3C Relaunches HTML Activity
The Point of CSS is to use clean, simple HTML in your page, then write CSS “rules” that style the objects on your page. The page stays clean and looks cool, and your HTML page works on both mobile devices and regular browsers. That’s the point of CSS.
But The Art of CSS is quickly and easily referring to the right objects in your page from your CSS rules. The act of matching CSS rules to HTML tags is like a conversation: both sides need to be clear and in sync with each other, or they’ll talk over each other and you’ll get a headache from all the yelling.
--John Manoogian III
Read the rest in John Manoogian III » Blog Archive » (The Only) Ten Things To Know About CSS
the REST debate has dope-slapped a lot of people who thought we should/would write new interfaces for every GetRandomThing() operation they might conceptualize. That is a VERY Good Thing.
--Michael Champion on the xml-dev mailing list, Thursday, 23 Feb 2006 13:55:54
One downside of using HTML is that errors in the document can cause odd behaviour and can be harder to track down than errors in XML/XHTML.
--Michael Day on the WHAT WG List mailing list, Thursday, 08 Mar 2007 15:04:20
XML still isn't likely to change the Web much on the client side, beyond the role it plays in Ajax and related technologies. (Even that role is likely to be reduced by JSON.) The dreams of XML hypertext are dead, or at least thoroughly dormant.
--Simon St. Laurent
Read the rest in XML.com: The XQuery Chimera Takes Center Stage
--David Megginson
Read the rest in Megginson Technologies: Quoderat
The poor uptake of XLink suggests there is some funny dynamic at play hindering linking in general, independent of the technical excellence of the solutions.
--Rick Jelliffe on the xml-dev mailing list, Wednesday, 7 Jun 2006 03:44:21 +1000
SQL/XML hits the sweet spot for me. Currently I need to interrogate a single relational schema in order to generate XML, which can later be styled to whatever output format is required using XSLT. SQL/XML is ideal - it doesn't require much more stuff to learn - just a bunch of XML generating primitives layered on top of nested queries, which is reasonably straightforward, although I find I easily get lost in the bracketing!
--John Watson on the xml-dev mailing list, Thu, 11 Nov 2004
For years, there are two messages that have consistently come from Microsoft - "We don't want to change the browser because we are afraid of angering our customers" and "The W3C has taken too long to get any standards work done, and the free-market approach that we espouse work better because we're more reactive to our customers." I think they are mutually exclusive. Most of the core work within the W3C was done between 1998 and 2003; in some cases there are second generation iterations of technologies that have been around for less time than IE went through any significant upgrade. Yet Microsoft has done almost nothing to work with these, has implemented those standards that it had a direct hand in (XSD Schema, which is a mess) and largely ignored those standards that it didn't.
--Kurt Cagle on the xml-dev mailing list, Saturday, 20 Jan 2007 16:28:26
How is it that professional web designers can go through all the motions of site design, from "branding" to Information Architecture, all the way down through wireframing and compositions, and NO one is capable of saying "Hey lads, no one can read that font, it's too small". No one even thinks of saying "Lads, blue on blue probably isn't a good idea".
--Des Traynor
Read the rest in What the large font giveth, the small font taketh away
What we have today with XML is uniform structure, which has enough of an infrastructure that you have xml editors, xsl engines, xpath viewers, all of which use that structure. These tools help embed XML into the world (people expect XML formats for everything) while not actually helping users once you get beyond the structural basics, unless they are knee-deep in application specific code (i.e. Ant-aware editors that know about property settings, targets, <import> etc)
--Steve Loughran on the rest-discuss mailing list, Sunday, 25 Feb 2007 17:03:09
Probably not when the extremes are that broad, but you can be sure that the local politicians will cave on this, and we can forget free municipal Wi-Fi and Skype phones. Free is, by definition, communist! And it hurts free enterprise!
Who needs progress when you have profits?
--John C. Dvorak
Read the rest in The Killing of Wi-Fi : The Threat of Wi-Fi.
--Aristotle Pagaltzis
Read the rest in XML 2.0: XML with graceful error handling?
Wordpress is written in PHP 4. It can not benefit from the best Tidylib, the real DOM-extension, XML Reader, XML Writer or Simple XML. All those extensions require PHP 5. Nor can Wordpress do professional interaction with the DBMS through mysqli or PDO, since those also require PHP 5. Wordpress may be spectacularly successful, but IMHO its design is a really crappy one. It was at the right time at the right place, but it is not a good example of how to do an enterprise PHP application.
--Keryx Web on the whatwg mailing list, Saturday, 17 Feb 2007 18:31:27
If there are many URIs for a given resource, the best implementation is for all of the other URIs to redirect to the one URI that is deemed to be "best" for the resource's unique semantics. The reason for that is not REST or Web Architecture (though both are specifically designed to enable it): the reason is network economics as expressed by power laws, Metcalfe's law, PageRank, and a hundred other restatements of the factors that place value on social networks.
--Roy T. Fielding on the rest-discuss mailing list, Thursday, 4 Jan 2007 14:16:58
anyone trying to process XML without using a proper XML parser is creating a pending disaster for themselves.
--Michael Kay on the jdom-interest mailing list, Wednesday, 22 Nov 2006 23:58:15
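A small illustration of Kay's point (this sketch is mine, using Python's standard library): naive string matching silently returns the wrong answer on markup that any conforming parser handles, such as comments and character references.

```python
import xml.etree.ElementTree as ET

# The element we want is <name>; a comment contains a decoy copy,
# and the real text uses a character reference.
doc = "<item><!-- <name>bogus</name> --><name>caf&#233;</name></item>"

# Naive string search finds the commented-out decoy first:
naive = doc[doc.find("<name>") + len("<name>"):]
naive = naive[:naive.find("</name>")]
assert naive == "bogus"  # wrong answer, delivered with confidence

# A real parser skips the comment and resolves the reference:
name = ET.fromstring(doc).findtext("name")
assert name == "caf\u00e9"
```

Entities, CDATA sections, attribute quoting styles, and encoding declarations all fail the same way under ad-hoc string processing, which is exactly the "pending disaster" the quote warns about.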
We must accept the other fellow's religion, but only in the sense and to the extent that we respect his theory that his wife is beautiful and his children smart.
--HL Mencken
Read the rest in Faith | Special reports | Guardian Unlimited.
--Keith Olbermann
Hear the rest in Special Comment: Secretary Rice, Get Your Facts.
--Ron Garrett
Read the rest in Rondam Ramblings: Top ten geek business myths
I suspect the only thing with a hope of dissuading UAs from doctype sniffing is sternly telling them that they SHOULD implement doctype sniffing. ;)
--Benjamin Hawkes-Lewis on the whatwg mailing list, Monday, 12 Feb 2007 12:09:03.
--Matt Taibbi
Read the rest in AlterNet: Maybe We Deserve to Be Ripped Off By Bush's Billionaires
If you have to write new client-side software to deal with your new server-side software, you have failed the REST test. A generic browser with sufficient knowledge of standard verbs and content types should be able to access and interact with your RESTful service. Special-purpose clients can still be written, but they should never be required.
--Benjamin Carlyle on the rest-discuss mailing list, Sunday, 01 Oct 2006 16:32:00
DTDs are not hard to learn, are very effective for tool users, and for many purposes, all one needs. That people rely on the XML specification over the myriad competing applications of XML is not that surprising. When something works, stop.
--Claude L (Len) Bullard on the xml-dev mailing list, Monday, 1 Nov 2004
Opera, Safari, Konqueror and Mozilla all support SVG. Microsoft uses VML, which is poorly documented, poorly implemented, has strong dependencies on Microsoft libraries and has been "frozen" since the mid-1990s. When Adobe's SVG plugin support expires in 2008, how many people who are depending upon SVG for their applications (such as the city of Toronto, not a small customer by any means) will just decide to refactor their applications to work on Mozilla and jettison their reliance on IE. When XForms support becomes integrated into Mozilla as part of the core suite (mid-summer 2007) how many customers of Infopath will start to see this as a commercially viable alternative? Or when Opera and Safari follow suit? How many web developers will just say "to hell with it" because IE's JavaScript support (excuse me, JScript support) doesn't even bother to support getters or setters, and the costs of maintaining two code bases gets to be too onerous. No, the lagging users are still firmly in the Microsoft camp, but the leading edge has been dropping IE in favor of alternatives at a far higher rate than the ones that are going the other way.
--Kurt Cagle on the xml-dev mailing list, Saturday, 20 Jan 2007 16:28:26
Since I am (among other things) a PHP developer I know all there is to know about sloppy coding... ;-)
--Keryx Web on the whatwg mailing list, Saturday, 17 Feb 2007 18:31:27
I believe we're making zero progress in computer security, and have been making zero progress for quite some time. Consider this: it's 2005 and people still get viruses. How much progress are we making, really? If we can't get a handle on relatively simple problems such as controlled execution and filesystem/kernel permissions, how much progress are we going to make on the really hard problems of security, such as dealing with transitive trust? It's 2005, and IT managers still don't seem to know how to build networks that don't collapse when a worm gets loose on them. Security thinkers realized back in the early 80's that networks were a good medium for attack propagation and that networks would need to be broken into separate security domains with gateways between them. None of this is rocket science - I think that what we're seeing today is the results of this massive exuberance in the late 1990's in which everyone rushed to put all their mission critical assets onto these poorly protected networks that they then hooked to the Internet. That was a dumb idea, and that fact just hasn't sunk in, yet.
--Marcus Ranum
Read the rest in Interview with Marcus Ranum
I have nothing good to say about the WSDL 1.1 specification (I’m sticking with the 1.1 specs as those are still the WS-I recommended versions). It is overly complex, often ambiguous, and occasionally inconsistent. In practice, tool-generated WSDL documents are nightmarish to read and the source of half of all interoperability issues (and I’m not referring to any XML Schema components). It’s also my position that WSDL is being used as a crutch by web service vendors and developers, even though the functionality it provides should be wholly unnecessary.
--Pete Lacey
Read the rest in InfoQ: Interview: Pete Lacey Criticizes Web Services
Casc.
--Alex Papadimoulis
Read the rest in The Daily WTF
I really don't know whether we'll be printing the Times in five years, and you know what? I don't care either,
--Arthur Ochs Sulzberger
Read the rest in NY Times publisher: Our goal is to manage the transition from print to internet - Haaretz
I would normally expect 512Mb to be sufficient to process an 80Mb source file - but not with much to spare! But it depends greatly on (a) the nature of the file (number of nodes matters more than number of bytes), and (b) the processing that the stylesheet is actually doing.
--Michael Kay on the saxon-help mailing list, Wed, 2 Feb 2005 16:31:48 -0000
I don't ever recall any version of DRM that didn't at least attempt to keep me from doing legal and useful things with whatever it was the DRM crudware was allegedly protecting. With PDFs it's typically trying to keep me from printing, and from using copy and paste. And that's just when the DRM isn't so broken that I can't open the fripping thing in the first place without stripping out the DRM.
--John Levine on the cpb mailing list, Sunday, Jan 2007 04:26:26.
--Blake Ross
Read the rest in Interview with Firefox Founder and Creator Blake Ross » Opera Watch
Resources are an abstraction -- a source of goodness as perceived by the person who linked to that resource that is in the form of a value-giver over time. There are no resources on the Web -- only senders and receivers of representations that have the effect of evaluating a resource mapping at invocation time, thereby becoming "the resource" as we perceive it over time even though we all know it is just a finite data server at any single point in time.
--Roy T. Fielding on the rest-discuss mailing list, Thursday, 4 Jan 2007 14:16:58
Read the rest in Apple
We seem to have sunk to a kind of playground system of forming contracts. Tag, you agree! Lawyers will tell you that you can form a binding agreement just by following a link, stepping into a store, buying a product, or receiving an email. By standing there, shaking your head, and shouting "NO NO NO I DO NOT AGREE," you agree to let the other guy come over to your house, clean out your fridge, wear your underwear and make some long-distance calls.
For example, if you buy a downloadable movie from Amazon Unbox, you agree to let them install spyware on your computer, delete any file they don't like on your hard-drive, and cancel your viewing privileges for any reason. Of course, it goes without saying that Amazon reserves the right to modify the agreement at any time.
--Cory Doctorow
Read the rest in Shrinkwrap Licenses: An Epidemic Of Lawsuits Waiting To Happen
Deep down inside every software developer, there's a budding graphic designer waiting to get out. And if you let that happen, you're in trouble. Or at least your users will be, anyway
--Jeff Atwood
Read the rest in Coding Horror: This Is What Happens When You Let Developers Create UI
Snap's preview anywhere gizmo is ruining the reading experience for millions of people. It's intrusive, obstructive and unuseful.
--Nick Wilson
Read the rest in 3 Reasons Why Snap Preview is Ruining Your Blog, and Hurting Your Readership | Performancing.com.
--Rob Weir
Read the rest in An Antic Disposition: How to hire Guillaume Port.
--Tim Bray
Read the rest in ongoing · JSON and XML
Back in '96-'97, me and a group of people, many of whom are here at Google, helped build stuff that these days is called AJAX. wandering around in a fire wearing matches, but we concluded we should go and build this thing. And we put all this stuff together so people could build thin-client applications.
--Adam Bosworth
Read the rest in Google's Bosworth: Why AJAX Failed (Then Succeeded)
All code is terminal. It lives under a death sentence. Data is immortal.
--Ken Downs
New York PHP user's group, 2006-10-24
Parsing XML should be pretty trivial, except for the annoying internal subset part. You basically have an inputstream which gives characters to the tokenizer, and the parser checks if the tokens come in the right order and creates a “correct tree” and all that. So all pretty straightforward. Parsing HTML is different. You still have an inputstream. You still have a tokenizer stage, but it’s more complicated. It has to deal with error handling, but also with input from the parser stage, as some states within the tokenizer do different things depending on which token has just been emitted. For instance, after you have encountered a script element start tag you have to consume characters and append them to the element until you see a script element closing tag (basically some lookahead handling after </). You also can’t simply tell a treebuilder that a start tag has been seen. Sometimes you need to insert elements directly before the last opened table element for instance (simply said). James is going to look into building some type of API on top of the parser so that everyone can implement that API and produce the tree he or she needs, such as ElementTree.
--Anne van Kesteren
Read the rest in Project html5lib: performance
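The script-element lookahead described above can be sketched in a toy tokenizer. This is an illustration only, far simpler than the real html5lib state machine: no attributes, no entities, no error recovery. The point is the mode switch: after a script start tag, markup characters are consumed as raw text until the matching close tag appears.

```python
def tokenize(html):
    """Toy tokenizer: emits ("start", name), ("end", name), ("chars", text)."""
    tokens, i, raw_until = [], 0, None
    while i < len(html):
        if raw_until:
            # Raw-text mode: ignore markup until the close tag we expect.
            end = html.lower().find(raw_until, i)
            if end == -1:
                end = len(html)
            tokens.append(("chars", html[i:end]))
            i, raw_until = end, None
        elif html[i] == "<":
            close = html.find(">", i)
            name = html[i + 1:close].strip("/").lower()
            kind = "end" if html[i + 1] == "/" else "start"
            tokens.append((kind, name))
            i = close + 1
            if kind == "start" and name == "script":
                raw_until = "</script"  # lookahead target for the close tag
        else:
            end = html.find("<", i)
            if end == -1:
                end = len(html)
            tokens.append(("chars", html[i:end]))
            i = end
    return tokens

# The "<" inside the script body is NOT treated as a tag start:
tokens = tokenize("<script>if (a < b) x();</script>")
assert tokens[1] == ("chars", "if (a < b) x();")
```

Without the mode switch, the `<` in `a < b` would be misread as the start of a tag, which is exactly why an HTML tokenizer needs feedback about which element was just opened.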
Go Daddy reserves the right at all times to disclose any information as Go Daddy deems necessary to satisfy any applicable law, regulation, legal process or governmental request, or to edit, refuse to post or to remove any information or materials, in whole or in part, in Go Daddy's sole discretion.
--Go Daddy Terms of Service
Read the rest in Low cost domain names, domain transfers, web hosting, email accounts, and so much more.
Read the rest in On Being and Deliciousness, with Wil Shipley
The "LISP could do everything XML does 30 years ago" argument is not new either. It is beside the point, however: for whatever reason, XML has become established as a ubiquitous standard and has got the network effect working in its favor, the other possibilities such as LISP and ASN.1 never did. Maybe we would have been better off if they had, but we'll never know.
--Michael Champion on the xml-dev mailing list, Saturday, 4 Dec 2004
If you are interested in a W3C technology, don't leave it to the last moment to find out what is happening.
--Steven Pemberton
Read the rest in Mozilla Firefox
what C++ did to C, is what XSLT 2.0 has done to XSLT 1.0.
--Mukul Gandhi on the xsl-list mailing list, Wednesday, 24 Jan 2007 00:00:39
The most unfortunate aspect of the show was the lack of wi-fi on the first day. Bloggers will forgive just about anything except bad wi-fi.
--hugh macleod
Read the rest in gapingvoid: "cartoons drawn on the back of business cards": le web 3
To dwell a bit more on XML Schema, I have to say that it is a deeply flawed specification. Not only is it notoriously complex and inconsistently implemented, it is fundamentally incapable of representing textual XML documents, as opposed to XML documents representing typed data. And even for representing data it leaves much to be desired. Burton Group is soon to publish a best practices document I wrote for creating interoperable, data-oriented schemas, but I can summarize it here: don’t use anonymous types, don’t use element groups, don’t use attribute groups, don’t use redefine, don’t use “any” elements, don’t use anyAttribute, don’t use anyType, don’t use lists, don’t use unions, don’t use substitution groups, and so on. So, not only is the ability to create typed instance documents of dubious value, XML Schema isn’t particularly good at doing it. If you want to use a schema language, use RelaxNG or Schematron instead.
--Pete Lacey
Read the rest in InfoQ: Interview: Pete Lacey Criticizes Web Services
--l.m.orchard
Read the rest in 0xDECAFBAD » do not taunt happy fun JSON
It is becoming very obvious what will happen over the next two to three.
--Mark Stephens
Read the rest in I, Cringely . The Pulpit . When Being a Verb is Not Enough | PBS
If there's anything we can learn from the mess that is RSS, at a certain point feed consumers should be allowed to say simply that a buggy feed is a buggy feed and that it falls on the responsibility of the feed publisher to get things right.
--James M Snell on the atom-syntax mailing list, Monday, 01 Jan 2007 16:00:09
My own failure to convince Zawinski that a GPL dual-license was a good thing for Mozilla still smarts; it meant that for the first couple of years of the Mozilla project (until dual-licensing took place, after Zawinski quit), Gnome developers were shut out completely.
--Roland Turner
Read the rest in Armadillo Reticence: Sun, Java and GPLv2
The distinction between operations and tasks is important in application design because the goal is to optimize the user interface for task performance, rather than sub-optimize it for individual operations. For example, Judy Olson and Erik Nilsen wrote a classic paper comparing two user interfaces for large data tables. One interface offered many more features for table manipulation and each feature decreased task-performance time in specific circumstances. The other design lacked these optimized features and was thus slower to operate under the specific conditions addressed by the first design's special features.
So, which of these two designs was faster to use? The one with the fewest features. For each operation, the planning time was 2.9 seconds in the stripped-down design and 4.6 seconds in the feature-rich design. With more choices, it takes more time to make a decision on which one to use. The extra 1.7 seconds required to consider the richer feature set consumed more time than users saved by executing faster operations.
--Jakob Nielsen
Read the rest in Productivity and Screen Size (Jakob Nielsen's Alertbox)
Please don't insult users with advertised prices that presuppose successful submission of a rebate form. It is not difficult to fill these forms out; but the error rate among rebate processing places is high enough to make it a lot of extra effort. When your customers have to sue either you or your supplier to get their rebates, they don't come back. Just advertise the amount you'll actually charge to the customer's credit card. If you can't work out a deal with the vendor offering the rebate, just drop it. It's better for everybody.
--Peter Seebach
Read the rest in The cranky user: Ho ho hum online retailers.
--Oliver Day
Read the rest in Analysis of Microsoft's Suicide Note (part 1) — BadVista
REST works because it makes absolutely no attempt to understand what the resource is, how it might be implemented, or the scope of how it will change over time. It eliminates the semantic burden of understanding by focusing only on the interface as a means of hiding knowledge from the other side, yet communicating all that needs to be said in the same way that two people communicate -- tossing representations across the gap with a relatively small number of pitch inflections to indicate what is expected in return. In short, REST doesn't care what the resource is or how many URIs identify the same resource, because to care would require understanding that would lead to coupling which is more dangerous than inefficiency.
--Roy T. Fielding on the rest-discuss mailing list, Thursday, 4 Jan 2007 14:16:58.
--Nicholas Carr
Read the rest in Rough Type: Nicholas Carr's Blog: Steve's devices
XML has never worked neatly with the heart of most web applications' architecture, the relational database. XML's hierarchical structures map poorly to relational database structures. You can, of course, create table- and record-like documents that fit easily with relational databases, but that's a fairly tiny if important subset of XML possibilities and documents.
Web applications built on relational databases can and do use XML, of course. Applications routinely generate XML from query results, and import XML documents by shredding them into pieces spread across tables. The more complicated the document, the more likely that multiple tables will be involved, or that it will prove easier to store the XML as a BLOB or a separate file.
--Simon St. Laurent
Read the rest in XML.com: The XQuery Chimera Takes Center Stage.
--Eric S. Raymond
Read the rest in World Domination 201
Non-dinosaurs may be surprised to learn that SGML's earliest, near-fatal challenger was not formats, but WYSIWYG. Old word processors (troff, Word Perfect, TeX, etc) all allowed you to play with tags; even the editors with presentation preview modes allowed you to edit the tags. Then WYSIWYG came along (with bastardized version of Ben Schneiderman's "direct manipulation" ideas) and the push was on for hiding tags both on-screen and in binary data formats, and against batch processing and transformation. SGML fitted into the UNIX pipes world that, while it never went away, was not the kind of mom-and-pop technology that soaked up all the capital and market share.
Apple, Adobe, MS, Corel, and all the software houses spent hundreds of millions of marketing dollars to push the glamour of WYSIWYG. Concepts of repurposing, semantic markup, hypertext links between documents, schema checking, document construction from components, let alone archiving or application-neut rality, were abandoned. The "failure" of SGML is the "failure" of Vi over PageMaker.
--Rick Jelliffe on the xml-dev mailing list, Friday, 05 Jan 2007 21:24:47
I've been doing a lot of JavaScript and DHTML and AJAX programming lately. Increasing quantities of it. Boy howdy. The O'Reilly DHTML book has gotten big enough to crush a Volkswagon Bug, hasn't it? And my CSS book has gone from pristine to war-torn in under a month. I've managed to stay in the Dark Ages of web programming for the past 10 years: you know, HTML, a little CGI, font color="red". Way old school. And now I'm getting the crash-course. The more I learn, the more I wish I'd known it for longer. Although then I'd have had to live through the long transition from Dark Ages to the muchly-improved situation we have today. Far from good, to be sure, but it's improved dramatically since last I looked.
--Steve Yegge
Read the rest in Stevey's Blog Rants: Blogger's Block #3: Dreaming in Browser Swamp
I don't consider blogging real writing (at least not in my case; occasionally you'll happen upon a blog that is a work of art). It's just "blogorrhea," part catharsis, part bully pulpit, partly a way of keeping in touch with friends and acquaintances since I am such a terrible correspondent.
--Poppy Z. Brite,
Read the rest in Writing
I want 2007 to be the year when the populace put its beautifully pedicured foot down and says, "That's it, no more panicked television hosts whipping up furor about What Everyone Else Is Doing (and How Can We Stop Them)?" I'm hoping in 2007, no one will really care What Everyone Else Is Doing as long as it's informed, consensual and they aren't hurting anyone else. Even if it involves the internet, a cell phone or a manatee costume.
--Regina Lynn
Read the rest in Wired News: Hoping for Good Sex in 2007
For use in Ajax, when exchanging moderately simple messages that will only be used internally, i.e. a web service only used by your own clients, JSON has the advantages of smaller message size, unless the messages are really tiny such as 'OK' etc, lack of need of other libraries to read and, if you don't use currently XML, nothing really new to learn. On the other hand XML is much more flexible and handles documents better, most platforms support it and it can take advantage of schemas, transforms, WSDL etc. which don't really exist in JSON. JSON still needs a library to create the object representation, otherwise it can be very tedious and repetitive, and is more of a risk, eval-ing a response could open the door to attacks, even if only annoyances for the client. So my view is, for relatively simple in-house messages JSON can have advantages, otherwise it loses to XML.
--Joe Fawcett on the xml-dev mailing list, Friday, 05 Jan 2007 09:55:38
Search engines are *ridiculously* sensitive to words that appear in the URI. On top of which, memorable URIs are more likely to survive transmission via cocktail napkin.
--Tim Bray on the atom-protocol mailing list, Wednesday, 25 Jan 2006 14:35:15
I don't think WYSIWIG is a workable conceptual model for HTML authoring since (X)HTML is all about what you mean, not what you see.
--Benjamin Hawkes-Lewis on the whatwg mailing list, Friday, 29 Dec 2006 12:47:33
The XML declaration isn't part of the data model that the XSLT processor sees. Its only purpose is to tell the XML parser how to construct the data model. By the time a document has been parsed its original encoding is of no further interest.
--Michael Kay on the saxon-help mailing list, Sunday, 20 Jan 2005 09:02:34
XForms bears about as much resemblance to HTML forms as a Bengal tiger has to a ferret.
--Kurt Cagle
Read the rest in Why XForms Matter, Revisited
2007 will be the year where LAMPers finally decide to stop being neutral about the WS-* mess and pick the side of REST: the next wave of Web APIs will stop supplying both a SOAP and REST API and just go with the latter.
--David Heinemeier Hansson
Read the rest in Where's i
Quotes in 2006 | Quotes in 2005 | Quotes in 2004 | Quotes in 2003 | Quotes in 2002 | Quotes in 2001 | Quotes in 2000 | Quotes in 1999
Help:HowTo
From Uncyclopedia, the content-free encyclopedia
The way to contribute to Uncyclopedia is by editing articles. Even just talking to other users involves editing an article called a talk page. (This wiki does not have a Message Wall; Special:Chat is enabled but no one ever goes there except by invitation.) So you have to edit.
How to edit
How do you edit an article?
- Clicking on the Edit tab near the top of the page opens the entire article for editing.
- Clicking on the [edit] tab at the start of a section opens that section for editing. You can only make changes to that section, but other Uncyclopedians can edit other sections at the same time without confusing Uncyclopedia (or each other, with the dreaded Edit Conflict notice).
When you click to edit part or all of a page, the next thing you see is an edit box. If you are used to editing in Microsoft Word or Wordpad, where "what you see is what you get," the Uncyclopedia edit box will take some learning, because what you see is code — called wikicode.
To create a new page, surf to any page that does not exist, either by typing its desired name as a web address in the URL bar, or typing its desired name into the Search box, or following any red-link to it. Uncyclopedia will ask you if you want to create such a page; if so, you will get an edit box into which you can type the contents of the new page.
Start on the right foot
Usually, typing ordinary English sentences will get them displayed as you type them. Most of the commands that are wikicode involve sequences of characters you would not type ordinarily.
- To get a new paragraph, press Enter twice. (Put a blank line into the edit box.) Please do not press Enter more than twice, because that will give your article more blank space than other Uncyclopedia articles have. Pressing Enter once generally has no effect; it may look like a new paragraph in the edit box, but in the article, all your text will be wrapped together in a single paragraph.
- Do not indent by typing Tab or spaces at the start of a paragraph! Tab has no effect, and starting a paragraph with spaces has a special effect that you probably don't want.
- It's almost never necessary to type HTML into an edit box. You type wikicode, Uncyclopedia translates it to HTML, your article looks like the other articles, and we look like an encyclopedia.
Please don't try out any of your clever ideas by editing this page. Instead, why not try them out on the page about Finland? Or, use the sandbox.
Learn2Preview
When you click on the Save button below the edit box, your changes are written into the encyclopedia.
It would be nice if, before saving your edit, you would look at it and be sure you typed everything correctly and that you got the effects you wanted. Click on Preview to tell Uncyclopedia to "render" the text you were working on, the way it will look when you save it. The preview screen includes your edit box, and you can continue editing. Click on Save only when you reach a good stopping point. This lets you write the same stuff but record fewer official edits to the encyclopedia.
Before saving, fill in the Summary field. Briefly state what you were trying to do. We know that the editors who worked on it before you were all morons, but you don't have to explain that.
That's all the basics! The rest of this page explains coding to get special effects you may want.
Highlighting
- To get italic text, precede and follow the text with 2 single quotes:
''italic''
- To get bold text, precede and follow the text with 3 single quotes:
'''bold'''
If there is some reason you need to turn off wikicode in an area — perhaps you really need to type 2 single quotes and have it not cause italics — you can use the HTML tag <nowiki> at the start of the area, and </nowiki> at the end.
You can create math formulas using the HTML tags <math> and </math> and the LaTeX code for writing formulas. Please use these sparingly, because even a truly funny formula is funny to few readers, and everyone else will find it hard to read.
Lists
Creating a list involves typing a special character as the first character of a line of text.
- To create a bulleted list like this, start the line with *.
- And ** creates a sub-item like this.
- To create a numbered list like this, start the line with #.
- Do not number your items yourself! It forces you and every writer who comes after you to waste time renumbering if adding or deleting items.
- Do not put any other type of line or paragraph in the numbered list: no photos (see below) or even a blank line, because the next list item will go back to 1.
If you start a line with : then that paragraph is indented.
- Here is an example.
If you start a line with a space, you get an example box.
Here is an example. This is sometimes useful to show coding examples; we use it lower on this page. It is also used to draw ASCII art, such as stick figures. This is a key component of a seriously ugly page. Be careful with this, because text inside the box doesn't wrap at all, and you don't know how wide your reader's screen is.
Section organization
An Uncyclopedia article that just goes on and on is unattractive and hard to read. It's better to break it into sections. To code a section heading: Put the text of the heading on a line by itself, preceded and followed by two equal signs. For example, the current section began with the coding:
==Section organization==
That's right: a "second-level head." Do not use =First-level heads= (with just one equal sign); they are too large for Uncyclopedia.
Subsections
Within a section, if you want subsections, create a section heading with three equal signs before and after. For example, just above, we coded:
===Subsections===
This section is so small that it really didn't need to be a subsection, but it was a good example. You can go to four equal signs, but don't go crazy.
Style of section headings
For book titles and so on, you can use italics in section headings. Don't boldface your section headings, especially if it means you think your section is more important than anyone else's section.
If you have three or more section headings, Uncyclopedia automatically includes a Table of Contents in the article. Don't do this yourself; you will only make it harder for the next guy to change the section organization.
If for some reason, you should not want your article to have a Table of Contents, type the following line anywhere in the document:
__NOTOC__
Links
Links are places on your page where, if the reader points to it and clicks, he is taken to some other page. Links connect Uncyclopedia articles to one another, help the reader navigate, and make us look like Wikipedia.
To create a link to an article in the database, place double square braces around the keyword like this:
[[text here]]
Piped links. To link to something other than the literal title of the article, use a pipe (Shift+Backslash) like this:
[[article title|Link words]]
This piped link displays as Link words, but when the reader clicks on it, it actually goes to the article you named on the left side of the pipe. The true target article should be at least slightly related to the words that appear. (Or something completely unrelated that will be hilarious. However, you will make the reader laugh less and less, the more you piss him off by sending him to places he doesn't want to go.)
A piped link to Pun to pat yourself on the back for having made a pun, or a piped link to Lies to express disagreement with something you have written, is valid, but making your funny point clearer would be better than hiding it in a piped link.
Altered links. Most Uncyclopedia article titles are singular nouns. If your article wants to use the plural, type the s just after the closing double-square-brackets. The entire word as modified will be the link. For example:
[[Town meeting]]s or
[[Pimp]]ing or
[[Pwn]]age.
If you need a more drastic modification than just typing letters at the end, use a piped link as described above.
Links to sections. A link can give the reader a way not to surf to a different article but to a specific section in that article. The format is as follows:
[[Article title#Section heading]]
For example, our guidance when writing about races or political movements is at Uncyclopedia:Choice of Words#Extremists. Clicking that takes you directly to the section named Extremists.
Red links. The article you link to must already exist, and you must type its name exactly. Otherwise, the link won't work and it will appear in red. Whenever[1] you see a red-link, please fix it, either by removing the double-square-brackets, changing it to the name of a page that does exist, or using a piped link to do so without changing the wording that is displayed. (Some red links are funny, when they (1) are about something that is red, or (2) are about someone where the lack of an article shows what a minor player he is. Some red links are invitations to write an article, such as the missing ones on U.S. Presidents.)
Images
A bunch of potatoes
It's easy to include photos or illustrations in an Uncyclopedia article. A lot of graphics already exist on the site, and you may be able to find them by perusing an appropriate category or checking a similar Uncyclopedia article.
To get a new illustration onto the website, you have to do these things:
- Log in or pick a user name and register as an Uncyclopedian,
- Get the illustration onto your computer, and
- Go to Special:Upload and follow the instructions to upload it from there to Uncyclopedia.
If using an existing illustration, note its name. If uploading a new one, give it a descriptive name and remember what you called it.
Now edit the article you want to include the illustration in. Just before the paragraph you want the illustration to line up with, type code like this on a line of its own:[1]
[[Image:imagename|thumb|right|sizepx|caption]]
In this format:
- right means it hugs the right edge of the page, like the one shown in this section. You can instead type left.
- imagename is the name of the image you want to include, including its extension.
- sizepx is how many pixels wide you want the picture to be. For example: 200px[2]
- caption is the caption you want beneath it.
For example,
[[Image:Potatoes.jpg|right|thumb|200px|A bunch of potatoes]] was the code that produced the picture at the start of this section. Anything right of the first pipe character can be left out.
- ↑ If you stick this code in the middle of a paragraph, the paragraph won't render correctly on the web page.
- ↑ Sadly, there is no way to make this vary according to how large the reader's browser window is, and you have no way of knowing how large it is. So please don't spend all evening trying to make this just the right size!
Media
Media (audio and movies) can be included in your article. You must use this sparingly, because our emphasis is original comedy writing, not sharing YouTubes or cataloguing funny stuff that exists elsewhere on the web.
To embed a media file, upload it to Uncyclopedia if it isn't already here (in the same way as for photos, see above), and use this format:
[[Media:Name of media file]]
External links
Links to other websites are discouraged. Again, we are about creating original comedy writing. In many cases where an author points to another website, a goal may be to provide "evidence" and the author may be engaged in advocacy: Instead of writing something funny about a celebrity, he points to news about the celebrity to show readers how stupid the celebrity is.
However, if you must create an external link, use single square brackets. After the opening bracket, type the URL (now sometimes called URI), complete with http: or https: or whatever.
[URL]
It will appear in the article as a reference number. If you would like it to appear as something else, then before typing the closing bracket, type a space, then the text that the reader would click on to be taken to the specified web page. For example:
One of our helpful Abuse Filter robots will ask you to confirm that you really need to do this.
Categories
Categories are another tool that helps readers and authors find related articles. Many articles and photos can be put into a category, such as Category:United States presidential election, 2016. If you are writing a new article related to this category, you can click on the category at the bottom of the page to see a list of items that may be useful to you.
You use the same double-square-bracket code to include a page in a category, typically at the bottom of the edit box:
[[Category:category name here]]
An article can be part of several categories.
If you are editing a category page, adding that page to an existing category will make it a subcategory within that category. Categorically speaking.
- Tip: check what categories already exist. If you use a category that already exists, is relevant, and contains many pages, then more people are likely to stumble across your new page. Category:Everything is also a good place to look for relevant categories. Just browse through the subcategories.
To refer to a category without making your article part of that category, precede the category with a colon:
[[:Category:category name here]]
This article used this technique above, when discussing the category on the 2016 election without having this article actually go into that category.
Sort order. If you are writing about a person, use a piped category link so that, when it is placed in a category, the list is sorted by last name. For example, if writing an article on "Joe Bloggs", have it sort as though its name were "Bloggs, Joe" by coding:
[[Category:category name here|Bloggs, Joe]]
Templates
If you use double-squiggly-braces rather than double-square-brackets, what you get is a template. A template is not a reference to a page; it is a command to copy the entire text of a page and insert it into the page you are writing. (You may know it as a "macro.")
Uncyclopedia will assume the template is a page in the Template: namespace, such as Template:Cquote. But it doesn't have to be. Don't create a template there for purely personal use; a template in your userspace works just as well. For example, as explained in UN:SIG, you can create a signature file at User:<insert name here>/signature and then type {{User:<insert name here>/signature}} to have the contents of your signature file come in wherever you sign a talk page.
Most of the time, rather than creating your own template, you'll use the many templates available on the website. Some of these are listed at Uncyclopedia:Templates. Keep in mind when you use templates that they are basically gimmicks, and writing creative and funny stuff is always better than using gimmicks (which, being in a template, means the same thing has been done many times before).
A template can have parameters (or arguments), which are commands to the template that affects the text it inserts in your page. They use the pipe (Shift+Backslash) symbol, like this:
{{name of template|first argument|second argument|foo=value for argument "foo"}}
For example, the {{Q}} template for quotations has arguments for the text of the quote and for who said it. There is a language involving lots of squiggly braces and pipe symbols so that a template combines these arguments as desired, creating text to be inserted into the page that calls it.
There's one template you should know about: {{WIP}}. If you have stuck an unfinished article in the main encyclopedia, use this to label it a Work In Progress. This template asks Admins not to delete it, unless you abandon it for a week, in which case it begs them to delete it.
Other techniques
To create a horizontal line, simply type 4 dashes on a line by itself
----
A horizontal line looks like this:
Use a horizontal line to separate distinct parts of a section. However, Uncyclopedia usually uses a new section heading to do this. Don't code horizontal lines simply to make a third-level section look like a second-level section. | http://uncyclopedia.wikia.com/wiki/Beginner's_Guide/Formatting | CC-MAIN-2016-18 | refinedweb | 2,897 | 60.24 |
The Awesome Power of Theory
Explorations in the untyped lambda calculus
Ron Garret
June 2014
0. TL;DR
This is a very long, very geeky post about using an extremely simple computational formalism to do something much more complicated and useful than you would think it was capable of. Specifically, it's about using the pure untyped lambda calculus to compute the factorial of 10. This post assumes some basic knowledge of Lisp.
1. Introduction
I want to show you a little technological miracle. This, believe it or not, is the factorial function:
(λ (n f) (n (λ (c i) (i (c (λ (f x) (i f (f x)))))) (λ x f) (λ x x)))
The reason you might be skeptical that this is the factorial function is that there are no mathematical operations in it. You are probably used to seeing the factorial written something more like this:
(defun fact (n) (if (<= n 1) 1 (* n (fact (1- n)))))
Or maybe this:
int fact(int n) { return (n <= 1 ? 1 : n * fact(n - 1)); }
Regardless of the syntax, you’d expect a factorial function to include some math, and there doesn’t seem to be any math there. In fact, there doesn’t seem to be much of anything there, so you would be rightly skeptical about my claim that this is in fact the factorial function. So to prove it, here it is running on a real computer, computing the factorial of 10:
? (λ ()
    ((λ (f s) (f (s (s (s (s (f (s (s (s (λ (f x) x)))))))))))
     (λ (n f) (n (λ (c i) (i (c (λ (f x) (i f (f x))))))
               (λ x f) (λ x x)))
     (λ (n f x) ((n f) (f x)))
     '1+ 0))
3628800
Actually, what this code does is compute the factorial of 3, add four to that, and then compute the factorial of the result. The whole thing takes less than a second to run on my MacBook Pro. If you want to try this yourself, here's the definition of λ in Common Lisp:
(defmacro λ (args body)
  (if (and args (atom args)) (setf args (list args)))
  (if (and (consp body) (consp (car body))) (push 'funcall body))
  (if (null args)
      body
      `(lambda (&rest args1)
         (let ((,(first args) (first args1)))
           (declare (ignorable ,(first args)))
           (flet ((,(first args) (&rest args2)
                    (apply ,(first args) args2)))
             (if (rest args1)
                 (apply (λ ,(rest args) ,body) (rest args1))
                 (λ ,(rest args) ,body)))))))
This code is complete and self-contained. You can cut-and-paste this code into any Common Lisp and it should work. Note that there are no calls to any Common Lisp math functions (except 1+) and no numbers other than 0.
This is more than just a parlor trick. This version of the factorial is the last step in a long process that contains deep insights into how programming languages work under the hood. Those twelve lines of code that define the λ macro are actually a compiler for a programming language called the untyped lambda calculus. The λ macro compiles the lambda calculus into Common Lisp, and from there it gets compiled into machine code. This lambda calculus is closely related to Lisp (which is why it is so easy to compile it into Lisp), but it isn’t Lisp. For starters, it doesn’t have lists. It doesn’t have anything except the ability to create functions and call them. No lists, no math, no primitive operations of any kind. It turns out you don’t need them.
The following sections will walk you step-by-step through the process of building a factorial function out of the lambda calculus. It is the closest thing I know to literally building something from (almost) nothing.
2. A quick introduction to the lambda calculus
I'm going to use Lisp notation because we're going to actually implement the lambda calculus in Lisp. The lambda calculus consists of three kinds of expressions:
1. Identifiers
2. Function applications
3. Lambda expressions
Identifiers are simply letters or words, like ‘a’, ’x’, or ‘foo’. They have no semantic meaning, and we could just as well use pictures instead of letters and words. But using descriptive words can sometimes make it easier to understand what is going on.
Function applications consist of two expressions surrounded by a matched pair of parentheses, i.e.:
([expression1] [expression2])
Here are some examples:
(f x)
(sin a)
(cos (sqrt x))
((f g) (h (y z)))
Note that “sin”, “cos” and “sqrt” don’t have any special meaning in the lambda calculus. There are no math operations or primitives of any kind. There are no numbers, nor any other data types (except functions).
The third kind of expression, lambda expressions, look like this:
(λ [identifier] [expression])
So lambda expressions look similar to function applications, but they have three elements instead of two, and the first is always λ and the second is always a single identifier. (Later we will extend the notation to allow multiple arguments, but this will just be a notational shorthand. It won’t actually change the language.)
A lambda expression denotes a function whose argument is [identifier] and whose value is [expression]. There’s a little more to it than that, but for our purposes that’s all really you need to know. So, for example, this is the identity function:
(λ x x)
And that’s it. That’s all there is to the lambda calculus. There are no primitives, no numbers, no strings, no characters, no data types of any kind, only functions, and only functions of exactly one argument. How could you possibly ever build anything useful out of that?
The trick to wringing useful behavior out of the lambda calculus is that functions are first-class entities. That means that you can pass them as arguments and return them as values. So, for example, here is the identity function applied to itself:
((λ a a) (λ a a))
Here the identity function (λ (a) a) plays the role of both f and a in the construct (f a). The result of actually calling the identity function on itself is, of course, the identity function.
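Since functions are the only values here, self-application is perfectly legal. The same point can be made in Python, whose single-argument lambdas are a close analogue (this translation is just an illustration; the article's own code is Common Lisp):

```python
# The identity function, as a single-argument lambda
identity = lambda a: a

# Applying the identity function to itself returns the identity function
result = identity(identity)
print(result is identity)   # prints True
```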
Now, if you actually try to run this code in Common Lisp using the λ macro above (and you can, and I encourage you to do so) you will probably (depending on which Common Lisp you are using) see something like this:
? ((λ a a) (λ b b))
#<Anonymous Function #x302000F5726F>
This is because the functions we build using λ are actually converted into Common Lisp functions by the λ macro, and thence compiled by the Common Lisp compiler into executable machine code. For example, here's the result of compiling the identity function:
? (disassemble (λ a a))
;;; (λ (a) a
L0
(leaq (@ (:^ L0) (% rip)) (% fn)) ; [0]
(movl (% nargs) (% imm0.l)) ; [7]
(subq ($ 24) (% imm0)) ; [9]
(jle L30) ; [13]
(movq (% rbp) (@ 8 (% rsp) (% imm0))) ; [15]
(leaq (@ 8 (% rsp) (% imm0)) (% rbp)) ; [20]
(popq (@ 8 (% rbp))) ; [25]
(jmp L34) ; [28]
L30
(pushq (% rbp)) ; [30]
(movq (% rsp) (% rbp)) ; [31]
L34
(leaq (@ (:^ L53) (% fn)) (% temp2)) ; [34]
(nop) ; [41]
(nop) ; [44]
(jmpq (@ .SPHEAP-REST-ARG)) ; [46]
L53
(leaq (@ (:^ L0) (% rip)) (% fn)) ; [53]
(pushq (% save0)) ; [60]
(movq (@ -8 (% rbp)) (% save0)) ; [62]
(movl (% save0.l) (% imm0.l)) ; [66]
(andl ($ 7) (% imm0.l)) ; [69]
(cmpl ($ 3) (% imm0.l)) ; [72]
(jne L177) ; [75]
(pushq (@ 5 (% save0))) ; [77]
(movl (% save0.l) (% imm0.l)) ; [81]
(andl ($ 7) (% imm0.l)) ; [84]
(cmpl ($ 3) (% imm0.l)) ; [87]
(jne L185) ; [90]
(movq (@ -3 (% save0)) (% temp0)) ; [92]
(cmpb ($ 11) (% temp0.b)) ; [96]
(je L159) ; [99]
(movq (@ -24 (% rbp)) (% arg_z)) ; [101]
(pushq (% arg_z)) ; [105]
(movl (% save0.l) (% imm0.l)) ; [106]
(andl ($ 7) (% imm0.l)) ; [109]
(cmpl ($ 3) (% imm0.l)) ; [112]
(jne L193) ; [115]
(movq (@ -3 (% save0)) (% arg_z)) ; [117]
(movq (@ -32 (% rbp)) (% temp0)) ; [121]
(xorl (% nargs) (% nargs)) ; [125]
(leaq (@ (:^ L141) (% fn)) (% temp2)) ; [127]
(jmpq (@ .SPSPREADARGZ)) ; [134]
L141
(leaq (@ (:^ L0) (% rip)) (% fn)) ; [141]
(movq (@ -16 (% rbp)) (% save0)) ; [148]
(jmpq (@ .SPTFUNCALLGEN)) ; [152]
L159
(movq (@ -24 (% rbp)) (% arg_z)) ; [159]
(addq ($ 8) (% rsp)) ; [163]
(popq (% save0)) ; [167]
(leaveq) ; [169]
(retq) ; [170]
L177
(uuo-error-reg-not-list (% save0)) ; [177]
L185
(uuo-error-reg-not-list (% save0)) ; [185]
L193
(uuo-error-reg-not-list (% save0)) ; [193]
That seems like an awful lot of work to do exactly nothing. The reason the identity function turns into so much code will become clear later. For now, we just need a better way to keep track of what is going on. To do that, we're going to give ourselves the ability to assign names to the functions we build, and also to keep track of some extra bookkeeping information:
(defmacro name (thing &optional name)
  (let* ((val (eval thing))
         (class (class-name (class-of val))))
    `(progn
       (defmethod print-object ((x (eql ',val)) stream)
         (format stream "#<~A ~A>" ',class ',(or name thing)))
       ,thing)))

(defmacro define (name value)
  `(progn
     (setf (get ',name 'λ) ',value)
     (defun ,name (&rest args) args) ; Suppress warnings
     (setf (symbol-value ',name) ,value)
     (define-symbol-macro ,name (symbol-value ',name))
     (defun ,name (&rest args) (apply ,name args))
     (name ,name)))
DEFINE is not part of the lambda calculus, it's just a convenient debugging tool to let us peer under the hood and see what is going on. So, for example:
? (define _id (λ a a))
#<FUNCTION _ID>
? (_id _id)
#<FUNCTION _ID>
I'm (mostly) going to use the convention of starting the names of lambda calculus functions with an underscore to distinguish them from Common Lisp functions. Remember, we are "compiling" the lambda calculus to Common Lisp with the λ macro, so we can freely mix lambda calculus and Common Lisp functions and data types if we choose to:
? (_id 123)
123
But this is cheating because 123 (and all numbers) are part of Common Lisp, not the lambda calculus.
(NOTE: At this point you might object that we are "cheating" in the factorial example because we're using the number 0 and the Common Lisp function 1+. But, as we shall see, those aren't actually used in the calculation itself. They are just used at the end to convert the result of the calculation into a format we can read.)
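The decoding trick works because the result of the calculation is a Church numeral: a function that applies its first argument n times to its second. Passing in a real successor function and a real zero (exactly what the '1+ and 0 at the end of the demo do) counts those applications. A sketch in Python, for illustration; the names zero, succ, and three are mine, not the article's:

```python
# Church numerals: the number n is the function that applies f to x, n times.
zero = lambda f: lambda x: x                      # f applied zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f

three = succ(succ(succ(zero)))

# Decoding: hand the numeral a real successor and a real zero,
# just as the demo hands the lambda-calculus result '1+ and 0.
print(three(lambda k: k + 1)(0))   # prints 3
```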
3. Closures
So how are we going to do anything useful with no primitives? We can build an identity function, but it's hard to see how we can do anything much more than that without "cheating" and using some Common Lisp primitives.
The key idea, the thing that gives the lambda calculus all of its computational power, is that of a closure. Because functions are first-class entities, we can write a function that returns a newly constructed function as its value. Here's the simplest possible example:
(λ a (λ b a))
This looks a lot like a function that returns the identity function, but notice that the function being returned is not the identity function. The value returned by the inner function (λ (b) a) is not its own argument b; it is the argument of the outer function, a. The upshot of this is that when you call this outer function, what you get back is a new function that has captured the value that you passed in to the outer function. When you call this new function, you get back the captured value. In effect, we have built a memory location.
Here it is in action. It will be easier to understand if we use DEFINE to give these functions names:
? (define f1 (λ a (λ b a)))
#<FUNCTION F1>
? (define f2 (f1 123))
#<FUNCTION F2>
? (f2 456)
123
The value 123 that we pass in to f1 gets stored in the returned function f2. When we call f2 we get the stored value back. (Exercise: what happens to the value we pass in to f2 (i.e. the 456)?)
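Incidentally, nothing here depends on Lisp; any language with first-class closures can express the same construction. Here it is transcribed into Python (the names are mine, purely for illustration):

```python
# A closure that captures its argument and returns it later:
# in effect, a one-value memory location.
f1 = lambda a: lambda b: a

f2 = f1(123)    # store 123 in the closure
print(f2(456))  # the 456 is discarded; prints 123
```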
4. Cons cells, multi-argument functions, and Currying
Now that we have built a storage location using a closure we are tantalizingly close to being able to build one of the key primitives that will let us build a fully-fledged Lisp: the cons cell. A cons cell is an aggregate of two storage locations, which historically were called the CAR and the CDR, but which we will call LEFT and RIGHT (or LHS and RHS for left-hand-side and right-hand-side). Besides being easier to read, if we try to name them CAR and CDR then we will have naming conflicts with our underlying Common Lisp and the compiler will complain. For the same reason I'm going to use the term "pair" instead of "cons cell".
So what we want to do is to build a function that takes two values and stores them both in the same way that we stored a single value in the previous section. So we want to do something like:
(define _pair (λ (left right) ...? ))
But now we have a problem: lambda calculus functions only take one argument. So how can we possibly write _pair? By definition, it has to take two arguments. It would seem that we're stuck. (It's worthwhile stopping at this point to see if you can figure out how to solve this problem before continuing.)
One possibility is to build up our pairs in stages. Recall our single storage cell that we built using a closure:
(λ a (λ b a))
Notice that the inner function takes an argument that gets discarded, never to be seen again. What if instead of discarding this value, and instead of just returning a, we instead returned another closure that has access to both a and b? Something like:
(define _pair (λ a (λ b (λ c ...? ))))
That seems promising, but this lack of primitives is vexing. How could we possibly extract a and b from the inner closure? What we would like to do is to have c play the role of a selector to choose which of the two captured values, a or b, gets returned, but how do we do that? We don't have an IF statement. We don't even have any actual data types that we can use to represent which value we want. All we have is functions (and only functions of one argument). So how do we build a selector?
We're going to do this in two stages. First, we are going to build multi-argument functions, because if we don't we are going to drown in a sea of nested parentheses. And then we will build our selector, and from that we will build a pair.
It turns out we don’t actually need "real" multi-argument functions. Instead we can simply introduce the following notational convention (where ::= means "is defined as"):
(λ (a b) ...) ::= (λ a (λ b ...))
(f a b) ::= ((f a) b)
So if we were to re-do our original closure example, in our original "pure" notation it would look like this:
? (((λ a (λ b a)) 123) 456)
123
And in our new "compactified" notation it looks like this:
? ((λ (a b) a) 123 456)
123
Because multi-argument functions are "really" nested single-argument functions, we don't have to pass all the arguments in at once. We can do "partial evaluation":
? (define f ((λ (a b) a) 123))
#<COMPILED-LEXICAL-CLOSURE F>
? (f 456)
123
This "partial application" of a function is also called "currying", after Haskell Curry, who invented the idea. Notice that the natural consequence of this definition is that a “function of zero arguments” doesn't have to be "called". Such a function is equivalent to its value, i.e. (λ () x) ::= x.
Before we go on to build pairs, let's see how we can use currying to build a conditional construct, an "if" function. We want to define the following functions:
(define _if (λ (condition then else) ...))
(define _true (λ (...) ...))
(define _false (λ (...) ...))
in such a way that the following holds:
(_if _true then else) ➔ then
(_if _false then else) ➔ else
We accomplish this by making _true and _false functions that take then and else as arguments and return the appropriate one (i.e. _true and _false are selectors):
(define _true (λ (then else) then))
(define _false (λ (then else) else))
Our condition argument is now a selector, so all _if has to do is call the condition on the then and else arguments:
(define _if (λ (condition then else) (condition then else)))
Let's see if it works:
? (_if _true 123 456)
123
? (_if _false 123 456)
456
Looks good. Now we can use the same basic technique to build a pair:
(define _pair (λ (l r) (λ (selector) (selector l r))))
(define _lhs (λ (pair) (pair (λ (l r) l))))
(define _rhs (λ (pair) (pair (λ (l r) r))))
The selectors themselves are embedded inside the _lhs and _rhs functions, and notice that they are in fact exactly the same as _true and _false. In fact, we could have written:
(define _pair (λ (l r) (λ (selector) (selector l r))))
(define _lhs (λ (pair) (pair _true)))
(define _rhs (λ (pair) (pair _false)))
Let's see if it works:
? (_lhs (_pair 123 456))
123
? (_rhs (_pair 123 456))
456
Cool! Now that we have pairs (a.k.a. cons cells) we are one step away from being able to build a linked list, and once we can do that we can build Lisp. The only thing missing is the empty list, or NIL, which we will need to mark the end of a list. So we need two more functions:
(define _nil (λ (a) ...? ))
(define _null? (λ (thing) ...? ))
With the property that:
(_null? _nil) ➔ _true
(_null? anything_else) ➔ _false
See if you can figure out how to make this work before reading on. It's quite challenging, but you have all the tools you need to figure it out at this point.
5. Lazy vs eager evaluation
We are starting to converge on all the things we need to build a factorial function. The basic strategy is that we are going to use linked lists to represent numbers. The integer N will be represented as a linked list of length N. So adding two numbers will boil down to appending two lists, which we already know how to do (assuming we can write _nil and _null?). We just take the traditional Lisp definition of APPEND:
(defun append (l1 l2)
(if (null l1) l2 (cons (car l1) (append (cdr l1) l2))))
and translate it into the lambda calculus:
(define _append
(λ (l1 l2)
(_if (_null? l1)
l2
(_pair (_lhs l1) (_append (_rhs l1) l2)))))
But if we try to run this, we will find we have two problems. First, we haven't defined _nil and _null? so here they are:
(define _nil (λ (selector) _true))
(define _null? (λ (pair) (pair (λ (l r) _false))))
Let's verify that this works:
? (_null? _nil)
#<FUNCTION _TRUE>
? (_null? (_pair _nil _nil))
#<FUNCTION _FALSE>
The second problem is more serious:
? (_append (_pair 1 _nil) (_pair 2 _nil))
> Error: Stack overflow on temp stack.
Why did this happen? It's because _if is a function, but in order to actually run this code it has to be a control construct. Here is an illustration of the problem:
? (_if _true (print 123) (print 456))
123
456
123
Because we are compiling this code into Common Lisp, and _if is a function, both branches get evaluated before the choice is made of which value to return. As a result of this, our recursive definition of _append never "bottoms out". It's an infinite loop, because the "else" clause of the _if always gets evaluated, so it always calls _append again.
There are two ways to solve this problem. The first is to make λ lazy, that is, to only evaluate arguments that are actually returned as values. This is what languages like Haskell do. And we could write a lazy λ in Common Lisp. But it turns out we won't actually need it in the end. So instead, for now, we're simply going to cheat, and define a __if macro (with two underscores) that turns our _if function into a control construct:
(defmacro __if (condition then else)
`(if (_if ,condition t nil) ,then ,else))
This uses the _if function to translate a lambda-calculus condition into the Common Lisp values T or NIL, and then uses Common Lisp's IF control construct to choose whether to evaluate the THEN or the ELSE clause. When we rewrite _append using __if instead of _if it works:
? (define _append
(λ (l1 l2)
(__if (_null? l1)
l2
(_pair (_lhs l1) (_append (_rhs l1) l2)))))
#<FUNCTION _APPEND>
? (define my-list (_append (_pair 1 _nil) (_pair 2 _nil)))
#<COMPILED-LEXICAL-CLOSURE MY-LIST>
? (_lhs my-list)
1
? (_lhs (_rhs my-list))
2
? (_null? (_rhs (_rhs my-list)))
#<FUNCTION _TRUE>
6. The Y combinator
We still have one serious "cheat" left in our code: our _append function recursively calls itself by name. This is only possible because we've given it a name using our DEFINE macro, but DEFINE is not part of the lambda calculus. Can we make a recursive function without DEFINE? The answer is yes. The way it's done is to use something called the Y combinator. This is a topic unto itself, and again, it will turn out we don't actually need it in the end, but if you want to take a deep dive into it at this point (and it's not at all a bad idea) I recommend this tutorial.
For our purposes, I'm simply going to tell you that there's this mysterious thing called the Y combinator, and it's defined like this:
(define y (λ (f) ((λ (g) (g g)) (λ (h x) ((f (h h)) x)))))
What Y does is take a function that is "almost recursive but not quite" (because it has no way to refer to itself) and "feed it back into itself" so that it can call itself recursively. Here's an illustration of how the y combinator is used:
? (define almost-factorial
(λ (f) (λ (n) (if (zerop n) 1 (* n (f (1- n)))))))
#<FUNCTION ALMOST-FACTORIAL>
? (funcall (y almost-factorial) 10)
3628800
ALMOST-FACTORIAL takes a function F and assumes that F is the factorial function. It then uses F to compute the factorial. (Note that we're cheating by using Common Lisp to do our math for us.) What Y does is essentially solve the following mathematical equation:
FACTORIAL = (ALMOST-FACTORIAL FACTORIAL)
If you want to know the details read the tutorial linked to above. Like I said, we will ultimately not use the Y combinator at all, and we have many other fish to fry.
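The same demonstration transcribes into Python, in case the mechanics are easier to trace there (we are still cheating by using the host language's arithmetic):

```python
# The Y combinator: feeds a function back into itself.
y = lambda f: (lambda g: g(g))(lambda h: lambda x: f(h(h))(x))

# "Almost factorial": assumes f already is the factorial function.
almost_factorial = lambda f: lambda n: 1 if n == 0 else n * f(n - 1)

print(y(almost_factorial)(10))  # 3628800
```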
7. Pairnums
We are finally at the point where we can take a first cut at writing a factorial function in pure lambda calculus. Our strategy is to represent numbers as linked lists, where an integer N is represented as a list of length N. We can then build up arithmetic like so:
(define pn_0 _nil)
(define pn_zero? _null?)
(define pn_add1 (λ (n) (_pair _nil n)))
(define pn_sub1 (λ (n) (_rhs n)))
PN stands for Pair-Number (pairnum for short), i.e. a number represented as lambda-calculus pairs. Zero (i.e. pn_0) in this representation is _nil, the empty list. We add one by creating a new pair that contains the pairnum that we added one to. To subtract one we simply reverse this process.
It is now straightforward to build addition, multiplication, and ultimately, the factorial function. We start by defining addition, first recursively:
(define pn_+ (λ (n1 n2)
(__if (pn_zero? n2)
n1
(pn_+ (pn_add1 n1) (pn_sub1 n2)))))
Then we convert this to a "non-cheating" version using the Y combinator:
(define pn_+ (y (λ (f)
(λ (n1 n2)
(__if (pn_zero? n2)
n1
(f (pn_add1 n1) (pn_sub1 n2)))))))
And likewise for multiplication, which is recursive addition:
(define pn_* (λ (n1 n2)
((y (λ (f)
(λ (n1 n2 product)
(__if (pn_zero? n2)
product
(f n1 (pn_sub1 n2)
(pn_+ n1 product))))))
n1 n2 pn_0)))
Note that multiplication is not exactly the same as addition. In the case of addition we could just add one to one number and subtract one from the other until we got to zero. But in the case of multiplication we need a third variable to accumulate the result.
Finally, the factorial:
(define pn_fact (y (λ (f)
(λ (n)
(__if (pn_zero? n)
(pn_add1 pn_0)
(pn_* n (f (pn_sub1 n))))))))
To make it easier to see what's going on at this point, let us define two utility functions to convert back and forth between pairnums and ordinary Lisp integers. Because these are just for debugging, we’ll simply define them in Lisp:
(defun pn (n &optional (pn pn_0))
(if (zerop n) pn (pn (1- n) (pn_add1 pn))))
(defun ppn (n &optional (cnt 0))
(__if (pn_zero? n) cnt (ppn (pn_sub1 n) (1+ cnt))))
PN converts a Lisp integer into a pairnum, and PPN (Print Pair Num) does the opposite:
? (ppn (pn 10))
10
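For reference, the entire pairnum construction fits in a screenful of Python, which can serve as a cross-check on the Lisp version. As before, Python's lazy conditional stands in for __if and named recursion stands in for DEFINE and the Y combinator; all names are mine:

```python
# Pairs and nil, as closures.
_pair = lambda l: lambda r: lambda sel: sel(l)(r)
_lhs  = lambda p: p(lambda l: lambda r: l)
_rhs  = lambda p: p(lambda l: lambda r: r)
_nil  = lambda sel: True
_null = lambda p: p(lambda l: lambda r: False)

# Pairnums: the integer N is a linked list of length N.
pn_0     = _nil
pn_zerop = _null
pn_add1  = lambda n: _pair(_nil)(n)
pn_sub1  = lambda n: _rhs(n)

def pn_add(n1, n2):
    return n1 if pn_zerop(n2) else pn_add(pn_add1(n1), pn_sub1(n2))

def pn_mul(n1, n2, product=pn_0):
    return product if pn_zerop(n2) else pn_mul(n1, pn_sub1(n2), pn_add(n1, product))

def pn_fact(n):
    return pn_add1(pn_0) if pn_zerop(n) else pn_mul(n, pn_fact(pn_sub1(n)))

# Converters, like PN and PPN.
def pn(n):  return pn_0 if n == 0 else pn_add1(pn(n - 1))
def ppn(n): return 0 if pn_zerop(n) else 1 + ppn(pn_sub1(n))

print(ppn(pn_fact(pn(5))))  # 120
```

(The factorial of 5 is about as far as this is pleasant to take; the Python version is every bit as inefficient as the Lisp one.)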
We can use these to test our factorial:
? (ppn (pn_fact (pn 6)))
720
It works! But why did we only do the factorial of 6? Weren't we going to compute the factorial of 10? Here’s the reason:
? (time (ppn (pn_fact (pn 6))))
(PPN (PN_FACT (PN 6)))
took 135,980 microseconds (0.135980 seconds) to run.
77,331 microseconds (0.077331 seconds, 56.87%) of which was spent in GC.
During that period, and with 4 available CPU cores,
146,451 microseconds (0.146451 seconds) were spent in user mode
8,753 microseconds (0.008753 seconds) were spent in system mode
39,398,912 bytes of memory allocated.
801 minor page faults, 0 major page faults, 0 swaps.
720
? (time (ppn (pn_fact (pn 7))))
(PPN (PN_FACT (PN 7)))
took 4,744,137 microseconds (4.744137 seconds) to run.
2,522,207 microseconds (2.522207 seconds, 53.16%) of which was spent in GC.
During that period, and with 4 available CPU cores,
4,363,321 microseconds (4.363321 seconds) were spent in user mode
431,969 microseconds (0.431969 seconds) were spent in system mode
1,636,124,272 bytes of memory allocated.
26,995 minor page faults, 0 major page faults, 0 swaps.
5040
Notice that it took about 35 times longer to compute the factorial of 7 than the factorial of 6. This isn’t too surprising. We’ve gone out of our way to hamstring ourselves and make our code about as inefficient as it could possibly be. At best, computing the factorial of N this way would require N! steps (since that is how long it would take to construct the answer directly).
At this point you might reasonably conclude that this theory is little more than an interesting academic curiosity. I mean, come on, if you can’t even compute the factorial of 8 in non-geological time?! But bear with me, I promise you it gets better.
Before we proceed, though, let us take a moment to reflect on what we have actually done here. The code we've written looks like ordinary Lisp code, but in fact it is all built exclusively out of a single primitive: λ. In fact, we can expand our definition of pn_fact to see what it would look like if we actually wrote it all out in terms of λ. To do that manually would be very tedious, so we'll write a little helper function (in Lisp):
(defun lc-expand (expr &optional expand-lambdas)
(if (atom expr)
(if (get expr 'λ)
(lc-expand (get expr 'λ))
expr)
(if (and expand-lambdas (eq (car expr) 'λ))
(lc-expand (macroexpand-1 expr))
(mapcar 'lc-expand expr))))
This uses the raw lambda-calculus expressions that the DEFINE macro saves for us to expand a lambda-calculus function into raw lambdas. This is what our factorial function ends up looking like when expanded:
[fully expanded expression omitted here — a page-long nest of λ's and __if's]
Wow, what a mess! No wonder it’s the very definition of horrible inefficiency. But notice that it's nothing but lambdas (and __if's)! And it actually works (for some value of “works”). But how do we get from this awful mess to the tiny compact version at the top of this post?
NOTE: If you actually try running lc-expand yourself you will not see the nicely formatted code that appears above. Instead you will see this:
[the same expression, upper-cased and without the pretty formatting]
The Λ symbol is an upper-case lambda, because Common Lisp by default translates all symbols into upper-case. The nicely formatted version was produced by this code:
(defun lc-expand-pretty (expr)
(string-downcase
(let ((*print-pretty* t)
(*print-right-margin* 100))
(princ-to-string (lc-expand expr)))))
8. Reduce
If you look at the expanded pairnum factorial above you will see a lot of repetitions of the Y combinator. This is because this definition contains embedded versions of the addition and multiplication functions, each of which uses a separate Y combinator to produce recursion.
Note, however, that all of our recursions have a similar pattern: they all traverse a linked list (because that’s how we’re representing numbers) while performing some operation and accumulating a result. For addition, that operation is adding 1. For multiplication, that operation is adding one of the multiplicands. We can “abstract away” the traversal of the list so that we don’t have to use so many Y combinators. The way this is done is to define a function called REDUCE. (And yes, this is the same “reduce” referred to in the “map-reduce” idiom made famous by Google. You will shortly see why “reduce” is a big deal.)
This is what the REDUCE function is going to look like when we’re done with it:
(define _reduce (λ (f i l)
(__if (_null? l)
i
      (_reduce f (f i (_lhs l)) (_rhs l)))))
_reduce takes a function f, an initial-value i, and a linked list l, and repeatedly calls f on i and the left-hand-side (i.e. the first element) of l. By passing pn_add1 as f we can build an addition function, and by passing that addition function as f we can build multiplication. But before we can do that we have to define _reduce properly. At the moment we have cheated by having _reduce call itself recursively, and we’re not allowed to do that. So we have to fix that first by defining an almost-finished version of _reduce and then transforming that into a recursive version using a Y combinator:
(define almost_reduce
(λ reduce (λ (f i l)
(__if (_null? l)
i
        (reduce f (f i (_lhs l)) (_rhs l))))))
(define _reduce (y almost_reduce))
or, if we expand it out:
(define _reduce
(y (λ r (λ (f i l)
(__if (_null? l) i (r f (f i (_lhs l)) (_rhs l)))))))
Now we can redefine addition and multiplication in terms of _reduce:
(define pn_+ (λ (n1 n2)
(_reduce (λ (total n) (pn_add1 total)) n1 n2)))
(define pn_* (λ (n1 n2)
     (_reduce (λ (product n) (pn_+ product n1)) pn_0 n2)))
Let’s see where this has brought us:
? (time (ppn (pn_fact (pn 6))))
(PPN (PN_FACT (PN 6)))
took 14,669 microseconds (0.014669 seconds) to run.
12,796 microseconds (0.012796 seconds, 87.23%) of which was spent in GC.
During that period, and with 4 available CPU cores,
15,942 microseconds (0.015942 seconds) were spent in user mode
325 microseconds (0.000325 seconds) were spent in system mode
1,420,912 bytes of memory allocated.
2 minor page faults, 0 major page faults, 0 swaps.
720
My, that looks promising. A factor of 10 performance improvement. Let’s see how far we can go now:
? (time (ppn (pn_fact (pn 7))))
(PPN (PN_FACT (PN 7)))
took 75,560 microseconds (0.075560 seconds) to run.
61,365 microseconds (0.061365 seconds, 81.21%) of which was spent in GC.
During that period, and with 4 available CPU cores,
84,667 microseconds (0.084667 seconds) were spent in user mode
2,380 microseconds (0.002380 seconds) were spent in system mode
9,371,120 bytes of memory allocated.
129 minor page faults, 0 major page faults, 0 swaps.
5040
Looking good! From 4.7 seconds down to 0.075, a 60x speedup. Apparently our speedup is not just a constant factor; we have achieved a reduction in our big-O complexity. Let’s keep going…
? (time (ppn (pn_fact (pn 8))))
(PPN (PN_FACT (PN 8)))
took 508,120 microseconds (0.508120 seconds) to run.
408,459 microseconds (0.408459 seconds, 80.39%) of which was spent in GC.
During that period, and with 4 available CPU cores,
507,662 microseconds (0.507662 seconds) were spent in user mode
14,311 microseconds (0.014311 seconds) were spent in system mode
72,110,448 bytes of memory allocated.
677 minor page faults, 0 major page faults, 0 swaps.
40320
From not being able to compute the factorial of 8 in any reasonable amount of time down to being able to do so in half a second. Not bad!
? (time (ppn (pn_fact (pn 9))))
> Error: Stack overflow on temp stack.
Ooh! So close! But again, this is not too surprising. We’re building up numbers as linked lists (represented as closures!) and so unless everything is properly tail recursive we’re going to grow the stack at least as large as our result. The factorial of 9 is 362880 so it’s not surprising that we blew our stack.
Still, we’ve come a long way with a single optimization. Maybe there’s another big win out there?
9. Church numerals
The next optimization we’re going to make is to change the representation of numbers. Instead of representing the number N as a linked list of length N (which in turn is represented as a series of closures) we’re going to use a representation which is more “native” to the lambda calculus. The Lambda calculus looks a lot like Lisp, and the design of Lisp was heavily influenced by it of course, but Lisp is fundamentally based on linked lists and S-expressions while the lambda calculus is fundamentally based on functions. We can build linked lists out of functions as we have seen, but it’s awkward and inefficient. The real brilliance of Lisp is that it gives you CONS, CAR and CDR as primitives. We can build those out of functions as we have seen, but just because we can doesn’t mean we should.
Instead, we are going to “go native” and represent a number as a repeated application of a function. In other words, the number N will be represented as a function that looks sort of like this:
(λ x (f (f (f … [N times] … (f x) …)
What will the function f be? Anything we want! We can just make it a parameter:
(λ (f x) (f (f (f … [N times] … (f x) …)
So the number 0 is:
(define cn_0 (λ (f x) x))
i.e. a function that takes a function f and an argument x and applies f to x zero times. The number 1 is:
(λ (f x) (f x))
2 is:
(λ (f x) (f (f x)))
and so on. In general, a number n will be a function defined by the following equation:
(n f x) ::= (f (f (f … [ n times ] … x)))
Numbers represented this way are called Church numerals, after Alonzo Church who invented them along with the lambda calculus. It’s not immediately apparent how this change in representation is going to help, but trust me, it will.
At this point we’re just building these things manually. If we want to compute we need to build arithmetic. As we saw in the case of pairnums, it suffices to write functions that can add and subtract 1. Everything else can be built from that. So to add 1 we need to do this:
(define cn_add1 (λ n (λ (f x) (f (n f x)))))
The CN in CN_ADD1 stands for Church Numeral of course. It is easy to see that this function does the Right Thing if you remember the defining equation above. If (n f x) is f applied n times, then (f (n f x)) must be f applied n+1 times, i.e. the Church numeral representation of n+1.
Notice that Church numerals are, essentially, little loops that iterate N times. We can use this fact to do all kinds of spiffy things. For example, we can call a Church numeral and pass in the Lisp function 1+ and an initial value of 0 to convert a Church numeral into a Lisp integer, e.g.:
? (define cn_three (cn_add1 (cn_add1 (cn_add1 cn_0))))
#<COMPILED-LEXICAL-CLOSURE CN_THREE>
? (cn_three '1+ 0)
3
We can use this property of Church numerals to define arithmetic on them without using a Y combinator:
(define cn_+ (λ (m n) (m cn_add1 n)))
(define cn_* (λ (m n) (m (cn_+ n) cn_0)))
Notice how “natural” these definitions feel: adding M and N is just M iterations of CN_ADD1 applied to N. Multiplying M and N is just M iterations of adding N applied to zero.
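Church numerals are just nested closures, so they too transcribe directly into Python, which is a handy way to double-check the definitions (the converter names are mine):

```python
cn_0    = lambda f: lambda x: x                     # apply f zero times
cn_add1 = lambda n: lambda f: lambda x: f(n(f)(x))  # one more application of f

cn_plus = lambda m: lambda n: m(cn_add1)(n)         # m iterations of add1, starting from n
cn_mult = lambda m: lambda n: m(cn_plus(n))(cn_0)   # m iterations of (+ n), starting from 0

# A Church numeral *is* a loop: run it on (1+) and 0 to read it back out.
to_int = lambda n: n(lambda k: k + 1)(0)
def from_int(k): return cn_0 if k == 0 else cn_add1(from_int(k - 1))

print(to_int(cn_plus(from_int(7))(from_int(9))))  # 16
print(to_int(cn_mult(from_int(7))(from_int(9))))  # 63
```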
Let’s also define two little utilities to convert back and forth between Churchnums and Lisp integers:
(defun cn (n) (if (zerop n) cn_0 (cn_add1 (cn (1- n)))))
(defun pcn (cn) (funcall cn '1+ 0))
Now let’s test our Churchnum math:
? (pcn (cn_+ (cn 7) (cn 9)))
16
? (pcn (cn_* (cn 7) (cn 9)))
63
We could at this point go on to define a recursive factorial using Church numerals in the same way that we did using pairnums, but we still have two missing pieces: we need to be able to subtract 1, and we need to be able to test for zero. Figuring out how to do these things is not trivial, and we don’t really need them anyway because we can compute a factorial by counting up to N instead of down from N, which is the natural way to do things using Church numerals. But you might want to see if you can figure out how to do it yourself. Your lambda-fu will be stronger if you go through this exercise. But this tutorial is already getting pretty long, so I’m going to skip that part.
Instead, let’s try to construct a function F such that (N F 1) is the factorial of N. What makes this tricky is that we have to keep track of two intermediate values, not just one: the partial factorial that we have computed so far, and the value of N that we need to multiply in at the current step. But lambda-calculus functions can only return a single value. Fortunately, we have already built pairs back in section 4. We can use pairs to package two values as one. So we will put the value of N in the LHS of a pair, and N! in the RHS:
(define cn_factorial_step
(λ (pair)
(_pair (cn_add1 (_lhs pair))
(cn_* (_lhs pair) (_rhs pair)))))
(define cn_0 (λ (f x) x))
(define cn_1 (cn_add1 cn_0))
(define cn_factorial
(λ (n) (_rhs (n cn_factorial_step (_pair cn_1 cn_1)))))
Again, notice how “natural” this all feels. CN_FACTORIAL is (the right hand side of) N repetitions of CN_FACTORIAL_STEP, which takes a pair (N, N!) and returns the pair (N+1, N*N!).
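The whole Church-numeral factorial, transcribed into Python as a sketch (my own helper names, and kept to small inputs so Python's stack stays comfortable):

```python
_pair = lambda l: lambda r: lambda sel: sel(l)(r)
_lhs  = lambda p: p(lambda l: lambda r: l)
_rhs  = lambda p: p(lambda l: lambda r: r)

cn_0    = lambda f: lambda x: x
cn_add1 = lambda n: lambda f: lambda x: f(n(f)(x))
cn_1    = cn_add1(cn_0)
cn_plus = lambda m: lambda n: m(cn_add1)(n)
cn_mult = lambda m: lambda n: m(cn_plus(n))(cn_0)

# One step takes the pair (i, (i-1)!) to (i+1, i!).
step = lambda p: _pair(cn_add1(_lhs(p)))(cn_mult(_lhs(p))(_rhs(p)))
cn_factorial = lambda n: _rhs(n(step)(_pair(cn_1)(cn_1)))

to_int = lambda n: n(lambda k: k + 1)(0)
def from_int(k): return cn_0 if k == 0 else cn_add1(from_int(k - 1))

print(to_int(cn_factorial(from_int(5))))  # 120
```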
Let’s expand that out to see what we have wrought:
(λ (n)
((λ pair (pair (λ (l r) r)))
(n
(λ (pair)
((λ (l r) (λ (selector) (selector l r)))
((λ n (λ (f x) (f (n f x))))
((λ pair (pair (λ (l r) l))) pair))
((λ (m n)
(m ((λ (m n) (m (λ n (λ (f x) (f (n f x)))) n)) n)
(λ (f x) x)))
((λ pair (pair (λ (l r) l))) pair)
((λ pair (pair (λ (l r) r))) pair))))
((λ (l r) (λ (selector) (selector l r)))
((λ n (λ (f x) (f (n f x)))) (λ (f x) x))
((λ n (λ (f x) (f (n f x)))) (λ (f x) x))))))
That’s still pretty messy, but it’s a whole lot better than before. Notice that all of our Y combinators are gone; with Church numerals we don’t need them any more. Furthermore, all of our __if cheats are gone too! This really is a pure lambda calculus factorial! But does it work? Well, let’s see:
? (time (pcn (cn_factorial (cn 7))))
(PCN (CN_FACTORIAL (CN 7)))
took 4,430 microseconds (0.004430 seconds) to run.
2,702 microseconds (0.002702 seconds, 60.99%) of which was spent in GC.
During that period, and with 4 available CPU cores,
6,208 microseconds (0.006208 seconds) were spent in user mode
363 microseconds (0.000363 seconds) were spent in system mode
1,381,040 bytes of memory allocated.
33 minor page faults, 0 major page faults, 0 swaps.
5040
Looking promising: about 4ms to compute the factorial of 7, compared to 75ms using pairnums, so roughly another factor of 17 performance improvement. Alas…
? (pcn (cn_factorial (cn 8)))
> Error: Stack overflow on temp stack.
One step forward, one step back. And unfortunately we have also gotten to the point where the next step is not so easy to explain or understand. This particular problem took me several hours to figure out and fix. The problem turns out to be in the definition of CN_ADD1:
(define cn_add1 (λ n (λ (f x) (f (n f x)))))
Because we are applying the last f after the first n f’s, this version of cn_add1 is not tail-recursive. We can make it tail-recursive by applying the extra f first instead of last:
(define cn_add1 (λ n (λ (f x) (n f (f x)))))
If you didn’t understand that, don’t worry about it. Tail recursion is another topic that would take us far afield, and it doesn’t really matter. Just take my word for it for now.
In any event, having made this fix, we can now for the first time compute the factorial of 10:
? (time (pcn (cn_factorial (cn 10))))
(PCN (CN_FACTORIAL (CN 10)))
took 4,598,543 microseconds (4.598543 seconds) to run.
2,715,376 microseconds (2.715376 seconds, 59.05%) of which was spent in GC.
During that period, and with 4 available CPU cores,
4,324,991 microseconds (4.324991 seconds) were spent in user mode
332,731 microseconds (0.332731 seconds) were spent in system mode
1,427,101,264 bytes of memory allocated.
65,028 minor page faults, 0 major page faults, 0 swaps.
3628800
A tad on the slow side still, but personally I think the fact that it works at all is pretty amazing.
In the next section we’re going to take this to a whole ‘nuther level.
10. Extreme optimization
To give credit where credit is due, I did not figure out any of the content of this section on my own. Everything from here on out is due to Bertram Felgenhauer.
First, let us recall our current baseline, which I have rewritten here as a single block of code:
(λ n
(_rhs
(n (λ (pair)
(_pair (cn_add1 (_lhs pair))
(cn_* (_lhs pair) (_rhs pair))))
(_pair (cn_add1 cn_0) (cn_add1 cn_0)))))
Our first optimization is to use the fact that we have implemented a pair as a function that calls a selector function with the lhs and rhs as arguments. Because we want to operate on both the lhs and the rhs at the same time, we can bypass these selectors and just call the pair on a function that does the computation we want to perform:
(λ n
(_rhs
(n (λ (pair)
      (pair (λ (l r) (_pair (cn_add1 l) (cn_* l r)))))
    (_pair (cn_add1 cn_0) (cn_add1 cn_0)))))
Next, we switch to continuation-passing style so that we don’t have to create pairs at all. Instead, we pass a multi-argument continuation in as an argument to the loop:
(λ n
(n (λ (c i m) (c (cn_add1 i) (cn_* i m)))
(λ (i m) m)
   (cn_add1 cn_0) (cn_add1 cn_0)))
Next we factor out cn_add1:
(funcall
(λ (add1 n)
(n (λ (c i m) (c (add1 i) (cn_* i m)))
(λ (i m) m)
(add1 cn_0) (add1 cn_0)))
 cn_add1)
Because we are going for extreme compactness now, I’m going to rename the variable “add1” to “s” (for “successor”):
(funcall
(λ (s n)
(n (λ (c i m) (c (s i) (cn_* i m)))
(λ (i m) m)
(s cn_0) (s cn_0)))
 cn_add1)
Now we expand out the references to cn_* and cn_add1. We could just do this directly, but it’s a bit tedious because cn_* is defined in terms of cn_+ which is defined in terms of cn_add1. Instead, I’m going to use a devilishly clever way of multiplying Church numerals:
(define cn_* (λ (m n) (λ (f) (m (n f)))))
Convincing yourself that this works is left as an exercise. The result is this:
(funcall
(λ (s n)
(n (λ (c i m) (c (s i) ((λ (m n) (λ (f) (m (n f)))) i m)))
(λ (i m) m)
(s cn_0) (s cn_0)))
(λ n (λ (f x) ((n f) (f x)))))
There is one last piece of low-hanging fruit here, and that is that we can expand out the call to cn_*:
(funcall
(λ (s n)
(n (λ (c i m) (c (s i) (λ (f) (i (m f)))))
(λ (i m) m)
(s cn_0) (s cn_0)))
(λ n (λ (f x) ((n f) (f x)))))
And finally, we substitute the definition of cn_0 (and factor it out while we’re at it):
((λ (zero s n)
(n (λ (c i m)(c (s i) (λ (f) (i (m f)))))
(λ (i m) m)
(s zero) (s zero)))
(λ (f x) x)
(λ n (λ (f x) ((n f) (f x)))))
This is quite a respectable showing. It’s pure lambda calculus, it’s pretty small, and it computes the factorial of 10 in about 3.5 seconds. One might think that we are approaching the limits of what can be accomplished, but no. There is one final optimization we can make that will improve our performance by a factor of 5! (N.B.: the exclamation point is there for emphasis. It’s a factor of 5, not 5 factorial.)
Notice that the base case for our iteration is a pair of ones. We are constructing our ones by calling the successor function on the Church numeral zero. In other words:
one ::= (cn_add1 cn_0) == ((λ n (λ (f x) (n f (f x)))) (λ (f x) x))
But we don’t need all this generality. We can just define one directly:
(define one (λ (f x) (f x)))
That will give us a performance improvement of about a factor of 1.5. The final factor of 3 (or so) comes from recalling that:
(λ (f x) (f x)) ::= (λ f (λ x (f x)))
In other words, ONE is not really a function of two arguments, it’s a function of one argument (f) that returns a closure which applies f to a second argument (x). But a closure that applies f to its argument is the same as the function f itself, so our definition of ONE is actually equivalent to the identity function! So we can substitute the identity function for the number one and build our base case from that:
((λ (one s n)
(n (λ (c i m)(c (s i) (λ (f) (i (m f)))))
(λ (i m) m)
one one))
(λ x x)
(λ n (λ (f x) ((n f) (f x)))))
And that version computes 10! in 680ms on my Macbook Pro.
We’re still not quite done, though at this point we are getting very near the end. Felgenhauer’s final version improves on this last one by only about 20% in terms of run-time performance, and the optimizations become progressively more difficult to explain. But just for completeness, I will include the final steps here as they were explained to me by Felgenhauer:
At this point, note that inside the step function, we have access to three values: the current values of 'c' and 'i', and also the return value of 'c (succ i) (mul i m)', which equals n!. This seems wasteful, so the next step is to accumulate the product in the return value instead. This works out because multiplication is commutative and associative. The result looks like this:
(λ n (n (λ (c i) (mul i (c (succ i))))
        (λ i one)
        one))
The size [of the code] is now [quite small], but the transformation was non-trivial. The final step is a peephole optimization (after inlining 'mul') that exploits the shape of Church numerals; rather than having 'c' return a Church numeral, we return a repeated iteration of a fixed 'f'. This results in the final [form of the code]:
(λ (n f) (n (λ (c i) (i (c (succ i))))
            (λ (i) f)
            one))
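Just to convince ourselves that this final form really does compute factorials, here is a transcription into Python lambdas. (The article’s Lisp curries applications implicitly; Python makes the currying explicit. The names `zero`, `succ`, `step`, `fac` and `to_int` are ours, not the article’s.)

```python
# Church numerals: n encodes the n-fold composition f^n
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
one  = lambda f: f          # (λ (f x) (f x)) eta-reduces to the identity

# Final factorial: fac n f = (n step (λi. f)) one, with step c i = i (c (succ i))
step = lambda c: lambda i: i(c(succ(i)))
fac  = lambda n: lambda f: n(step)(lambda i: f)(one)

# Decode a Church numeral into a machine integer
to_int = lambda n: n(lambda k: k + 1)(0)

five = succ(succ(succ(succ(succ(zero)))))
print(to_int(fac(five)))    # → 120
```

Tracing small cases by hand (as we did above for `fac 2`) shows the iteration unfolding into exactly the product 1 · 2 · … · n of repeated compositions.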
And, of course, having come this far, we have to take it up to 11:
? (time (pcn (fac (cn 11))))
(PCN (FAC (CN 11)))
took 6,771,754 microseconds (6.771754 seconds) to run.
3,017,167 microseconds (3.017167 seconds, 44.56%) of which was spent in GC.
During that period, and with 4 available CPU cores,
6,331,324 microseconds (6.331324 seconds) were spent in user mode
516,300 microseconds (0.516300 seconds) were spent in system mode
3,257,955,856 bytes of memory allocated.
5 minor page faults, 0 major page faults, 0 swaps.
39916800
11. Conclusions
Think about what we have done here: we have taken a mathematical formalism that at first glance looks like it would not be capable of doing anything practical and used it to compute the factorial of 10 (and even 11!) on a real machine in a few seconds. Not only that, but we have done this by compiling code in this formalism into x86 machine code, and we’ve done all that in a few dozen lines of code. All this was made possible by two key insights:
1. We can make storage locations and multi-argument functions out of closures
2. We can produce extreme performance improvements by abstracting away recursion through constructs like REDUCE and Church numerals.
But the real lesson here is that functional programming is not just an academic curiosity suitable only for the ivory tower. The theory of functional programming can produce practical results on real-world problems. This is not to say that computing factorials is all that useful, but if you can do factorials in the pure lambda calculus, imagine what you can do with an industrial-strength functional language like Haskell. | https://www.tefter.io/bookmarks/41527/readable | CC-MAIN-2020-05 | refinedweb | 8,478 | 69.92 |
poison alternatives and similar packages
Based on the "JSON" category.
Alternatively, view poison alternatives based on common mentions on social networks and blogs.
- jason: A blazing fast JSON parser and generator in pure Elixir.
- jsx: An Erlang application for consuming, producing and manipulating JSON. Inspired by yajl.
- ja_serializer: JSONAPI.org serialization in Elixir.
- joken: Elixir JWT library.
- jsonapi: JSON:API serializer and query handler for Elixir.
- jose: JSON Object Signing and Encryption (JOSE) for Erlang and Elixir.
- jsone: Erlang JSON library.
- json: Native JSON library for Elixir.
- exjsx: JSON for Elixir, based on jsx.
- json_web_token_ex: An Elixir implementation of the JSON Web Token (JWT) standard, RFC 7519.
- jazz: Yet another library to handle JSON in Elixir.
- exjson: JSON parser and generator in Elixir.
- JSON-LD.ex: An implementation of JSON-LD for Elixir.
- jsxn: jsx but with maps.
- tiny: A small, fast and fully compliant JSON parser in Elixir.
- json_pointer: Implementation of RFC 6901, which defines a string syntax for identifying a specific value within a JSON document.
- jwalk: Helper module for working with Erlang proplists, eep 18, map and mochijson-style JSON representations.
- JsonStreamEncoder: Streaming encoder for JSON in Elixir.
- world_json: Elixir module for the world in geo.json.
- jwtex: A JWT encoding and decoding library in Elixir.
README
Poison
Poison is a new JSON library for Elixir focusing on wicked-fast speed without sacrificing simplicity, completeness, or correctness.
Poison takes several approaches to be the fastest JSON library for Elixir.
Poison uses extensive sub binary matching, a hand-rolled parser using several techniques that are known to benefit BeamAsm for JIT compilation, IO list encoding and single-pass decoding.
Poison benchmarks sometimes put Poison's performance close to jiffy, and usually faster than other Erlang/Elixir libraries.
Poison fully conforms to RFC 7159, ECMA 404, and fully passes the JSONTestSuite.
Installation
First, add Poison to your mix.exs dependencies:

```elixir
def deps do
  [{:poison, "~> 5.0"}]
end
```
Then, update your dependencies:
$ mix deps.get
Usage
```elixir
Poison.encode!(%{"age" => 27, "name" => "Devin Torres"})
#=> "{\"name\":\"Devin Torres\",\"age\":27}"

Poison.decode!(~s({"name": "Devin Torres", "age": 27}))
#=> %{"age" => 27, "name" => "Devin Torres"}

defmodule Person do
  @derive [Poison.Encoder]
  defstruct [:name, :age]
end

Poison.encode!(%Person{name: "Devin Torres", age: 27})
#=> "{\"name\":\"Devin Torres\",\"age\":27}"

Poison.decode!(~s({"name": "Devin Torres", "age": 27}), as: %Person{})
#=> %Person{name: "Devin Torres", age: 27}

Poison.decode!(~s({"people": [{"name": "Devin Torres", "age": 27}]}), as: %{"people" => [%Person{}]})
#=> %{"people" => [%Person{age: 27, name: "Devin Torres"}]}
```
Every component of Poison (encoder, decoder, and parser) is usable on its own without buying into other functionality. For example, if you were interested purely in the speed of parsing JSON without a decoding step, you could simply call Poison.Parser.parse.
Parser
```elixir
iex> Poison.Parser.parse!(~s({"name": "Devin Torres", "age": 27}), %{})
%{"name" => "Devin Torres", "age" => 27}

iex> Poison.Parser.parse!(~s({"name": "Devin Torres", "age": 27}), %{keys: :atoms!})
%{name: "Devin Torres", age: 27}
```
Note that keys: :atoms! reuses existing atoms, i.e. if :name was not allocated before the call, you will encounter an argument error message. You can use the keys: :atoms variant to make sure all atoms are created as needed. However, unless you absolutely know what you're doing, do not do it. Atoms are not garbage-collected; see the Erlang Efficiency Guide for more info:
Atoms are not garbage-collected. Once an atom is created, it will never be removed. The emulator will terminate if the limit for the number of atoms (1048576 by default) is reached.
Encoder
```elixir
iex> Poison.Encoder.encode([1, 2, 3], %{}) |> IO.iodata_to_binary
"[1,2,3]"
```
Anything implementing the Encoder protocol is expected to return an IO list to be embedded within any other Encoder's implementation and passable to any IO subsystem without conversion.
```elixir
defimpl Poison.Encoder, for: Person do
  def encode(%{name: name, age: age}, options) do
    Poison.Encoder.BitString.encode("#{name} (#{age})", options)
  end
end
```
For maximum performance, make sure you @derive [Poison.Encoder] for any struct you plan on encoding.
Encoding only some attributes
When deriving structs for encoding, it is possible to select or exclude specific attributes. This is achieved by deriving Poison.Encoder with the :only or :except options set:
```elixir
defmodule PersonOnlyName do
  @derive {Poison.Encoder, only: [:name]}
  defstruct [:name, :age]
end

defmodule PersonWithoutName do
  @derive {Poison.Encoder, except: [:name]}
  defstruct [:name, :age]
end
```
In case both :only and :except keys are defined, the :except option is ignored.
Key Validation
According to RFC 7159 keys in a JSON object should be unique. This is enforced and resolved in different ways in other libraries. In the Ruby JSON library for example, the output generated from encoding a hash with a duplicate key (say one is a string, the other an atom) will include both keys. When parsing JSON of this type, Chromium will override all previous values with the final one.
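The "last one wins" behavior described for Chromium is what many other parsers do as well, which is exactly why duplicate keys are an interoperability hazard. For example, Python's standard json module silently keeps the final value for a repeated key:

```python
import json

# Two entries share the key "foo"; the parser keeps only the last one.
doc = '{"foo": "foo1", "foo": "foo2"}'
print(json.loads(doc))  # → {'foo': 'foo2'}
```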
Poison will generate JSON with duplicate keys if you attempt to encode a map with atom and string keys whose encoded names would clash. If you'd like to ensure that your generated JSON doesn't have this issue, you can pass the strict_keys: true option when encoding. This will force the encoding to fail. Note that validating keys can cause a small performance hit.
```elixir
iex> Poison.encode!(%{:foo => "foo1", "foo" => "foo2"}, strict_keys: true)
** (Poison.EncodeError) duplicate key found: :foo
```
Benchmarking
$ MIX_ENV=bench mix run bench/run.exs
Current Benchmarks
As of 2021-07-22:
- Amazon EC2 c5.2xlarge instance running Ubuntu Server 20.04:
License
Poison is released under the public-domain-equivalent 0BSD license.
*Note that all licence references and agreements mentioned in the poison README section above are relevant to that project's source code only. | https://elixir.libhunt.com/poison-alternatives | CC-MAIN-2021-43 | refinedweb | 1,073 | 50.73 |
An Introduction To JBoss RichFaces
Application
Creating a new project
- Select File/New/JSF Project
- For the project name, enter “richfaces-start”
- For JSF environment, select JSF 1.2, Facelets, RichFaces
- For a template, select one based on the Tomcat version you are using
- Click Finish to create the project
Creating the model class
As we are dealing with users, we are going to create a user model class.
In the JavaSource directory, create an example.model.User Java class and notice the package name:
package example.model;
public class User {
@Override
public String toString() {
return name + " " + email;
}
private String name;
private String email;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getEmail() {
return email;
}
public void setEmail(String email) {
this.email = email;
}
public User(String name, String email) {
super();
this.name = name;
this.email = email;
}
}
The class is very simple. Our application will list one or more of these users. Next, we are going to create a managed bean. The managed bean will basically be a model for the user interface we will be building shortly.
Creating the managed bean
In JavaSource, create an example.beans.UserBean Java class. Again, pay attention to the package name:
package example.beans;
import java.util.ArrayList;
import java.util.List;
import javax.annotation.PostConstruct;
import example.model.User;
public class UserBean {

    private List<User> users;

    public List<User> getUsers() {
        return users;
    }

    @PostConstruct
    public void init() {
        users = new ArrayList<User>();
        users.add(new User("Joe", "joe@gmail.com"));
        users.add(new User("Charley", "charley@ymail.com"));
        users.add(new User("John", "john@hotmail.com"));
        users.add(new User("Greg", "greg@gmail.com"));
        users.add(new User("Prescila", "prescila@aol.com"));
    }
}
This class is also simple. A list of five users is created and placed inside an ArrayList. The @PostConstruct annotation is useful for initializing properties. It guarantees that the annotated method will only be called once when the bean is created.
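The container's side of that contract is easy to picture: after constructing the bean, it looks for a method marked with the lifecycle annotation and invokes it exactly once. The sketch below mimics that in Python as a stand-in for the JSF container (the decorator and function names here are invented for illustration; a real container does this via reflection over the Java annotation):

```python
# Minimal sketch of a container honoring a post-construct lifecycle hook.
def post_construct(method):
    method._post_construct = True  # mark the method, like the annotation does
    return method

def create_managed_bean(cls):
    bean = cls()  # 1. construct the bean
    # 2. invoke any marked method exactly once, right after construction
    for name in dir(bean):
        member = getattr(bean, name)
        if callable(member) and getattr(member, "_post_construct", False):
            member()
    return bean

class UserBean:
    def __init__(self):
        self.users = None

    @post_construct
    def init(self):
        self.users = ["Joe", "Charley", "John", "Greg", "Prescila"]

bean = create_managed_bean(UserBean)
print(len(bean.users))  # → 5
```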
To make this bean a managed bean, we need to register it in a JSF configuration file.
- Open WebContent/WEB-INF/face-config.xml
- Switch to the Tree view
- Select Managed Beans and click Add...
- Keep the scope request
- For Class, enter the bean’s full Java class name, example.beans.UserBean
- For Name, enter or keep userBean
- Click Finish
We are ready to create the user interface.
Managed bean registration looks like this:
<managed-bean>
<managed-bean-name>userBean</managed-bean-name>
<managed-bean-class>example.beans.UserBean</managed-bean-class>
<managed-bean-scope>request</managed-bean-scope>
</managed-bean>
Scala Records introduce a data type Rec for representing record types. Records are convenient for accessing and manipulating semi-structured data. Records are similar in functionality to F# records and shapeless records; however, they do not impose an ordering on their fields. Most relevant use cases are:
- Manipulating large tables in big-data frameworks like Spark and Scalding
- Manipulating results of SQL queries
- Manipulating JSON, YAML, XML, etc.
Records are implemented using macros and completely blend in the Scala environment. With records:
- Fields are accessed with a path just like regular case classes (e.g.
rec.country.state)
- Type errors are comprehensible and elaborate
- Auto-completion in the Eclipse IDE works seamlessly
- Run-time performance is high due to specialization with macros
- Compile-time performance is high due to the use of macros
Quick Start
To create a simple record run:
```scala
import records.Rec

scala> val person = Rec("name" -> "Hannah", "age" -> 30)
person: records.Rec{def name: String; def age: Int} = Rec { name = Hannah, age = 30 }
```
Fields of records can be accessed just like fields of classes:
```scala
if (person.age > 18) println(s"${person.name} is an adult.")
```
Scala Records allow for arbitrary levels of nesting:
```scala
val person = Rec(
  "name" -> "Hannah",
  "age" -> 30,
  "country" -> Rec("name" -> "US", "state" -> "CA"))
```
They can be explicitly converted to case classes:
```scala
case class Country(name: String, state: String)
case class Person(name: String, age: String, country: Country)

val personClass = person.to[Person]
```
They can also be converted implicitly when the contents of records.RecordConversions are imported:

```scala
import records.RecordConversions._

val personClass: Person = person
```
In case of erroneous access, type errors will be comprehensible:
```scala
scala> person.nme
<console>:10: error: value nme is not a member of records.Rec{def name: String; def age: Int}
              person.nme
                     ^
```
Errors are also appropriate when converting to case classes:
```scala
val person = Rec("name" -> "Hannah", "age" -> 30)
val personClass = person.to[Person]

<console>:13: error: Converting to Person would require the source record
to have the following additional fields: [country: Country].
       val personClass = person.to[Person]
                                  ^
```
Including Scala Records in Your Project
To include Scala Records in your SBT build please add:
```scala
libraryDependencies += "ch.epfl.lamp" %% "scala-records" % <version>
```
Support
It is "safe" to use Scala Records in your project. They cross-compile against all minor Scala versions after 2.10.2. We will give our best effort to fix all the bugs promptly until we find a more principled, and functioning, solution for accessing semi-structured data in Scala. For further details see this page.
Current Limitations
For All Scala Versions
Record types must not be explicitly mentioned. In case of explicit mentioning the result will be a run-time exception. In 2.11.x this would be detected by a warning. For example:

```scala
val rec: Rec { def x: Int } = Rec("x" -> 1)
rec.x // throws an exception
```
Records will not display nicely in IntelliJ IDEA. IntelliJ IDEA does not support whitebox macros:
- Writing a custom implementation for IntelliJ would remove this limitation.
In the Eclipse debugger, records cannot be debugged when conversions to case classes are used. For this to work the IDE must understand the behavior of implicit macros.
In the Eclipse debugger records display as their underlying data structures. If these structures are optimized it is hard to keep track of the fields.
For Scala 2.10.x
- All record calls will fire a warning for a reflective macro call.
```
[warn] 109: reflective access of structural type member macro method baz should be enabled
[warn] by making the implicit value scala.language.reflectiveCalls visible.
[warn] row.baz should be (1.7)
```
To disable this warning users must introduce import scala.language.reflectiveCalls in a scope or set the compiler option -language:reflectiveCalls.

- Least upper bounds (LUBs) of two records can not be found. Consequences are the following:
- If two queries return the same records the results can not be directly combined under a same type. For example, List(Rec("a" -> 1), Rec("a" -> 2)) will not be usable.
Performance
Scala Records compile asymptotically faster and run asymptotically faster than type-based approaches to records (e.g. HMaps). For up-to-date benchmarks check out this repo.
Helping Further Development
In case you have any desires for new functionality, or find errors in the existing one, please report them in the issue tracker. We will gladly discuss further development and accept your pull requests.
Contributors
Scala Records are developed with love and joy in the Scala Lab at EPFL in collaboration with Michael Armbrust from Databricks. Main contributors are:
- Vojin Jovanovic (@vjovanov)
- Tobias Schlatter (@gzm0)
- Hubert Plocziniczak (@hubertp) | https://index.scala-lang.org/scala-records/scala-records/scala-records/0.4?target=_2.10 | CC-MAIN-2020-10 | refinedweb | 779 | 57.27 |
On Tue, Oct 07, 2003 at 08:39:48PM +0200, Thiemo Seufer wrote:
> Date: Tue, 7 Oct 2003 20:39:48 +0200
> To: Ralf Baechle <ralf@linux-mips.org>
> Cc: Brendan O'Dea <bod@debian.org>, 200215@bugs.debian.org,
>     Drew Scott Daniels <umdanie8@cc.UManitoba.CA>,
>     Colin Watson <cjwatson@debian.org>,
>     debian-mips@lists.debian.org, Atsushi Nemoto <anemo@mba.ocn.ne.jp>
> Subject: Re: Bug#200215: some debug info... gdb and strace broken on casals?
> Content-Type: text/plain; charset=us-ascii
> From: Thiemo Seufer <ica2_ts@csv.ica.uni-stuttgart.de>
>
> Ralf Baechle wrote:
> [snip]
> > So basically I like Thiemo's suggestion for the fix. But - the purpose
> > of the three unused 32-bit fields in struct msgid64_ds is dealing with
> > the year 2038 problem. So maybe we should reorder fields like:
> >
> > [...]
> > #if defined(CONFIG_MIPS32) && !defined(CONFIG_CPU_LITTLE_ENDIAN)
> >         unsigned long __unused1;
> >         __kernel_time_t msg_stime;
> > #elif defined(CONFIG_MIPS32) && defined(CONFIG_CPU_LITTLE_ENDIAN)
> >         __kernel_time_t msg_stime;
> >         unsigned long __unused1;
> > #else
> >         __kernel_time_t msg_stime;
> > #endif
> > [...]
> >
> > ?
>
> This looks good for the kernel side.
>
> > That would eventually permit extending fields to 64-bit and take care of
> > endianness issues.
> >
> > Comments?
>
> I missed the other endianness. Appended is the version needed for glibc.

Okay, I suggest you send this patch to Uli for libc and I'll prepare a patch
for the kernel, will post here later.

... unless anybody thinks this patch is going to cause breakage that should be
avoided. We could be more graceful about compatibility but at least from my
perspective that's not really worth the effort.

Last chance to complain :-)

  Ralf
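The endianness constraint behind the proposed field ordering (the padding word goes before the time field on big-endian, after it on little-endian) can be checked directly: if a 32-bit field is later widened to 64 bits in place, the old 32-bit slot must coincide with the low half of the new value. A quick sketch:

```python
import struct

t = 0x12345678  # an example timestamp value that fits in 32 bits

# Pack a future 64-bit timestamp, then read a 32-bit field from each half.
le = struct.pack("<Q", t)
be = struct.pack(">Q", t)

# Little-endian: the low 32 bits occupy the FIRST 4 bytes,
# so the old field keeps its offset if padding follows it.
assert struct.unpack("<I", le[:4])[0] == t

# Big-endian: the low 32 bits occupy the LAST 4 bytes,
# so the padding word must come first.
assert struct.unpack(">I", be[4:])[0] == t
```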
package Want;

require 5.006;
use Carp 'croak';
use strict;
use warnings;

require Exporter;
require DynaLoader;
our @ISA = qw(Exporter DynaLoader);

our @EXPORT    = qw(want rreturn lnoreturn);
our @EXPORT_OK = qw(howmany wantref);
our $

of course. Be warned that C<want('ARRAY')> is a B<very> different thing
from C<wantarray()>.

=head2 Item count

Sometimes in list context the caller is expecting a particular number of
items to be returned:

    my ($x, $y) = foo();   # foo is expected to return two items

If you pass a number to the, C<want(2)>, C<want(100)>, C<want(1E9)> and
so on will all return true; and so will C<want('Infinity')>. The
C<howmany> function can be used to find out how many items are wanted.
If the context is scalar, then C<want(1)> returns true and C<howmany()>
returns 1.

If you want to check whether your result is being assigned to a
singleton list, you can say C<if (want('LIST', 1)) { ... }>.

=head2 Boolean context

=head1 FUNCTIONS

=over 4

=item want(SPECIFIERS)

B<don't> want it to be true

    want(2, '!3');             # Caller wants exactly two items.

    want(qw'REF !CODE !GLOB'); # Expecting a reference that
                               # isn't a CODE or GLOB ref.

    want(100, '!Infinity');    # Expecting at least 100 items,
                               # but there is a limit.

If the I<REF> keyword is the only parameter passed, then the type of
reference will be returned. This is just a synonym for the C<wantref>
function: it's included because you might find it useful if you don't
want to pollute your namespace by importing several functions, and to
conform to Damian Conway's suggestion in RFC 21.

Finally, the keyword I<COUNT> can be used, provided that it's the only
keyword you pass. Mixing COUNT with other keywords is an error. This is
a synonym for the C<howmany> function.

A full list of the permitted keywords is in the B<ARGUMENTS> section
below.

=item rreturn

Use this function instead of C<return> from inside an lvalue subroutine
when you know that you're in RVALUE context.
If you try to use a normal C<return>, you'll get a compile-time error
in Perl 5.6.1 and above unless you return an lvalue. (Note: this is no
longer true in Perl 5.16, where an ordinary return will once again
work.)

=item lnoreturn

Use this function instead of C<return> from inside an lvalue subroutine
when you're in ASSIGN context and you've used C<want('ASSIGN')> to
carry out the appropriate action.

If you use C<rreturn> or C<lnoreturn>, then you have to put a bare
C<return;> at the very end of your lvalue subroutine, in order to stop
the Perl compiler from complaining. Think of it as akin to the C<1;>
that you have to put at the end of a module. (Note: this is no longer
true in Perl 5.16.)

=item howmany()

Returns the I C<want('COUNT')>.

=item wantref()

Returns the type of reference which the caller is expecting, or the
empty string if the caller isn't expecting a reference immediately.
The same as C<want('REF')>.

=back

=head1 EXAMPLES

=head1 ARGUMENTS

The permitted arguments to the C<want> function are listed below.
The list is structured so that sub-contexts appear below the context
that they are part of.

=over 4

=item * VOID

=item * SCALAR

=over 4

=item * REF

=over 4

=item * REFSCALAR

=item * CODE

=item * HASH

=item * ARRAY

=item * GLOB

=item * OBJECT

=back

=item * BOOL

=back

=item * LIST

=over 4

=item * COUNT

=item * E<lt>numberE<gt>

=item * Infinity

=back

=item * LVALUE

=over 4

=item * ASSIGN

=back

=item * RVALUE

=back

=head1 EXPORT

The C<want> and C<rreturn> functions are exported by default. The
C<wantref> and/or C<howmany> functions can also be imported:

    use Want qw'want howmany';

If you don't import these functions, you must qualify their names as
(e.g.) C<Want::wantref>.

=head1 INTERFACE

This module is still under development, and the public interface may
change in future versions. The C<want> function can now be regarded as
stable. I'd be interested to know how you're using this module.
=head1 SUBTLETIES

There are two different levels of B<BOOL> context. I<Pure> boolean
context occurs in conditional expressions, and the operands of the
C<xor> and C<!>/C<not> operators. Pure boolean context also propagates
down through the C<&&> and C<||> operators.

However, consider an expression like C<my $x = foo() && "yes">. The
subroutine is called in I<pseudo>-boolean context - its return value
isn't B<entirely> ignored, because the undefined value, the empty
string and the integer 0 are all false.

At the moment C<want('BOOL')> is true in either pure or pseudo boolean
context. Let me know if this is a problem.

=head1 BUGS

* Doesn't work from inside a tie-handler.

=head1 AUTHOR

Robin Houston, E<lt>robin@cpan.orgE<gt>

Thanks to Damian Conway for encouragement and good suggestions, and
Father Chrysostomos for a patch.

=head1 SEE ALSO

=over 4

=item * L<perlfunc/wantarray>

=item * Perl6 RFC 21, by Damian Conway.

=back

=head1 COPYRIGHT

Copyright (c) 2001-2012, Robin Houston. All Rights Reserved. This
module is free software. It may be used, redistributed and/or modified
under the same terms as Perl itself.

=cut
Creating a Simple Live Chat Server with NestJS and WebSockets
By Josh Morony
When we are building applications, there are many instances where we want data updates from the server to display immediately. Perhaps we have a chat application and we want to display new messages to a user, or maybe we’ve built a game that needs to display an update to the user as soon as something happens on the server.
The problem with a typical client/server set up is that we would trigger a request from the client to load some data from a server when the application loads and it would pull in the latest set of data - for the sake of an example, let’s say we would load in all of the current chat messages from the server. Once that initial load and request has been made, what happens when somebody else updates the data on the server (e.g. when someone adds a new chat message)? Nobody would know that a new chat message has been added except the person who added it, so we need some way to check the data on the server after the initial request for data.
We could, for example, set up some code in our client-side application to check (or “poll”) the server every 10 seconds. Every 10 seconds the application would make a request to the server for the latest chat data, and load any new messages. This is a viable solution, but it’s not ideal. This approach has two glaring flaws which are:
- A lot of likely unnecessary requests are made to the server
- There is a potentially long delay before the user will see the new data (which would be especially annoying in a chat application)
This brings me to the topic of today’s tutorial, there is a much better way to handle this situation…
Introducing Web Sockets
The WebSocket API allows for event-based two-way communication between a browser and a server. If we consider our initial example of “polling” a server every 10 seconds to get new data to be like calling somebody on the phone every 10 seconds for updates, then using web sockets instead would be like just calling them once and keeping the call open in the background so either party can communicate instantly whenever required.
When using Web Sockets, an event could be triggered as soon as a new chat message is added by anybody, and any clients listening will instantly be notified. This means that the chat message that was added would show up near instantaneously for the other users.
NestJS has support for web sockets built-in, which makes use of the popular socket.io package. This makes setting up communications using the WebSocket API in NestJS rather simple.
In this tutorial, we are going to build a simple live chat application using NestJS and Ionic/Angular. We will use our NestJS server to:
- Broadcast any chat messages to any listening clients
- Notify when a new client connects
- Notify when a client disconnects
The server will just be used to relay information, it will be the responsibility of the clients to display the information. When we are done, the application should look something like this:
Before We Get Started
This tutorial assumes that you are already familiar with the basics of NestJS (and Ionic/Angular if you are using that on the front-end). If you need more tutorials on NestJS in general, I have more NestJS tutorials available here.
Although we are using Ionic on the front-end for this tutorial, it doesn't particularly matter. You could use any front-end you like, but we will be covering specifically how to use the ngx-socket-io package, which makes it easier to use socket.io in Angular. If you are not using Angular, then you would need to implement socket.io in some other way in your application (the basic concepts remain the same).
1. Creating the NestJS Server
First, we are going to create our NestJS server. In order to use web sockets in your NestJS project, you will need to install the following package:
npm install --save @nestjs/websockets
With that package installed, we are going to create a new module called ChatModule, which will handle our web socket communication.
Run the following command to create a new module called Chat:
nest g module chat
Once the module has been generated, we are going to create a new file to implement a @WebSocketGateway.
Create a file at src/chat/chat.gateway.ts and add the following:
```typescript
import {
  WebSocketGateway,
  WebSocketServer,
  SubscribeMessage,
  OnGatewayConnection,
  OnGatewayDisconnect
} from '@nestjs/websockets';

@WebSocketGateway()
export class ChatGateway implements OnGatewayConnection, OnGatewayDisconnect {

  @WebSocketServer() server;

  users: number = 0;

  async handleConnection() {
    // A client has connected
    this.users++;

    // Notify connected clients of current users
    this.server.emit('users', this.users);
  }

  async handleDisconnect() {
    // A client has disconnected
    this.users--;

    // Notify connected clients of current users
    this.server.emit('users', this.users);
  }

  @SubscribeMessage('chat')
  async onChat(client, message) {
    client.broadcast.emit('chat', message);
  }
}
```
This is basically everything we need to handle the communication for our application, so let's break it down. First, we decorate our class with @WebSocketGateway, which is what will allow us to make use of the socket.io functionality.
You will notice that this class also implements OnGatewayConnection and OnGatewayDisconnect. This isn't strictly required, but since we want to keep track of clients connecting and disconnecting, we implement the handleConnection() and handleDisconnect() hooks. These will be triggered every time a client connects or disconnects.
We set up a member variable called server that is decorated with @WebSocketServer, which will give us access to the server instance. We can then use this to trigger events and send data to connected clients. We make use of this in our handleConnection and handleDisconnect hooks, where we increment or decrement the total number of users and then notify any connected clients of the new number of users.
The @SubscribeMessage decorator is used to listen for incoming messages. If we want to send a chat event from our client to the server, then we need to decorate the function that will handle that event with @SubscribeMessage('chat'). This function (onChat in this case) has two parameters: the first (which we are calling client) will be a reference to the socket instance, and the second (which we are calling message) will be the data sent by the client.
Since we want all connected clients to know about this chat message when it is received, we trigger a broadcast to those clients with client.broadcast.emit('chat', message). Then, any clients listening for the chat event would receive this data instantly.
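The distinction between server.emit (deliver to every connected client) and client.broadcast.emit (deliver to everyone except the sender) is worth internalizing. Here is a toy, socket.io-free sketch of that delivery logic; the Hub class and its method names are invented for illustration:

```python
class Hub:
    """Toy message hub mimicking socket.io delivery semantics."""

    def __init__(self):
        self.clients = []  # each client is just a list collecting received events

    def connect(self):
        inbox = []
        self.clients.append(inbox)
        return inbox

    def emit(self, event):
        # like server.emit: every connected client receives the event
        for inbox in self.clients:
            inbox.append(event)

    def broadcast(self, sender, event):
        # like client.broadcast.emit: everyone except the sender receives it
        for inbox in self.clients:
            if inbox is not sender:
                inbox.append(event)

hub = Hub()
a, b, c = hub.connect(), hub.connect(), hub.connect()
hub.emit("users: 3")          # all three inboxes get the user count
hub.broadcast(a, "chat: hi")  # only b and c get a's chat message
print(a)  # → ['users: 3']
print(b)  # → ['users: 3', 'chat: hi']
```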
We are almost done with the server, but before our gateway will start listening we need to add it to the providers in our chat module.
Modify src/chat/chat.module.ts to reflect the following:
```typescript
import { Module } from '@nestjs/common';
import { ChatGateway } from './chat.gateway';

@Module({
  providers: [ChatGateway]
})
export class ChatModule {}
```
Remember, when testing you will need to make sure your server is running with:
npm run start
2. Creating the Client-Side Application
With the server complete, now we just need to set up some kind of front-end client to interact with it. As I mentioned, we will be creating an example using Ionic/Angular but you could use any front-end you like. You could even have multiple different front ends interacting with the same server.
Install the following package in your Ionic/Angular project:
npm install --save ngx-socket-io
This package just implements socket.io in an Angular friendly way. As well as installing it, we will also need to configure the package in our root module.
Make sure to configure the
SocketIoModuleas shown in the app.module.ts file below: { SocketIoModule, SocketIoConfig } from 'ngx-socket-io'; const config: SocketIoConfig = { url: '', options: {}}; @NgModule({ declarations: [AppComponent], entryComponents: [], imports: [ BrowserModule, IonicModule.forRoot(), AppRoutingModule, SocketIoModule.forRoot(config) ], providers: [ StatusBar, SplashScreen, { provide: RouteReuseStrategy, useClass: IonicRouteStrategy } ], bootstrap: [AppComponent] }) export class AppModule {}
We use a
url of as this is where our NestJS server is running. In a production environment, you would change this to the location of wherever your NestJS server is running. Now, we are going to focus on implementing a chat service to handle most of the logic, and afterward, we will create a simple interface to send and receive chats.
Run the following command to create a chat service:
ionic g service services/chat
Modify src/app/services/chat.service.ts to reflect the following:
import { Injectable } from '@angular/core'; import { Socket } from 'ngx-socket-io'; @Injectable({ providedIn: 'root' }) export class ChatService { constructor(private socket: Socket) { } sendChat(message){ this.socket.emit('chat', message); } receiveChat(){ return this.socket.fromEvent('chat'); } getUsers(){ return this.socket.fromEvent('users'); } }
Once again, there isn’t actually all that much code required to get this communication working. We inject
Socket from the
ngx-socket-io package to get a reference to the socket instance, and then we utilise that our three methods.
The
sendChat function will allow us to send a message to the server. We call the
emit method with a value of
chat which means that it is going to trigger the function on the server decorated with
@SubscribeMessage('chat'). The server will receive the message sent, and then rebroadcast that to any other clients listening.
Whilst the
sendChat function handles sending data to the server, the other two methods handle receiving data from the server. The
receiveChat method listens to the
chat event, which means that every time this line is triggered on the server:
client.broadcast.emit('chat', message);
The
receiveChat method will get that message data. Similarly, the
getUsers method listens to the
users event, and every time this line is triggered on the server:
this.server.emit('users', this.users);
The
getUsers method will receive the total number of active users. With these methods in place, let’s make use of them in one of our pages.
Modify src/app/home/home.page.ts to reflect the following:
import { Component, OnInit } from '@angular/core'; import { ChatService } from '../services/chat.service'; @Component({ selector: 'app-home', templateUrl: 'home.page.html', styleUrls: ['home.page.scss'], }) export class HomePage implements OnInit { public users: number = 0; public message: string = ''; public messages: string[] = []; constructor(private chatService: ChatService){ } ngOnInit(){ this.chatService.receiveChat().subscribe((message: string) => { this.messages.push(message); }); this.chatService.getUsers().subscribe((users: number) => { this.users = users; }); } addChat(){ this.messages.push(this.message); this.chatService.sendChat(this.message); this.message = ''; } }
Your page might not look exactly like this, but the example above is the gist of what needs to happen. In our
ngOnInit hook we subscribe to both the
receiveChat and
getUsers methods which both return an observable. Every time a communication is received from the server, the observable will emit the new data and we can do something with it. In this case, we are pushing any new messages into the
messages array, and any time we receive the total number of users from the server we set the
users member variable to that value.
The only other method we have here is
addChat which simple passes on whatever message the user typed to the
sendChat method in the chat service (as well as adding the message to the messages array, since this client won’t receive a broadcast for its own chat message).
All of the main functionality has been completed now, but let’s create a nice chat interface to test it out in.
Modify src/app/home/home.page.html to reflect the following:
<ion-header no-border> <ion-toolbar <ion-title>Live Chat</ion-title> </ion-toolbar> </ion-header> <ion-content> <ion-card> <ion-card-content> There are currently <strong>{{users}}</strong> users online. Start chatting! </ion-card-content> </ion-card> <ion-list <ion-item * {{message}} </ion-item> </ion-list> </ion-content> <ion-footer> <ion-toolbar> <textarea spellcheck="true" autoComplete="true" autocorrect="true" rows="1" class="chat-input" [(ngModel)]="message" placeholder="type message..." (keyup.enter)="addChat()"> </textarea> <ion-buttons <ion-button (click)="addChat()" slot="end" class="send-chat-button"> <ion-icon</ion-icon> </ion-button> </ion-buttons> </ion-toolbar> </ion-footer>
Modify src/app/home/home.page.scss to reflect the following:
textarea { width: calc(100% - 20px); margin-left: 10px; background-color: #fff; font-size: 1.2em; resize: none; border: none; }
3. Test with Multiple Clients
Now, all we need to do is test it in the browser! This is a little bit awkward because we are going to need multiple clients in order to test the web socket communication. How you achieve this might depend on what kind of front-end you are using, but if you are using Ionic/Angular it is simple enough.
First, make sure your NestJS server is running with:
npm run start
Next, in a separate terminal window, serve your Ionic application:
ionic serve
The, open up another terminal window, and serve your Ionic application again:
ionic serve
You should now have two instances of your Ionic application running on two separate ports in the browser. You should be able to enter a chat message into either instance, and it will immediately show up in the other. At this point, the application should look something like this:
Summary
This is a very simplistic/bare-bones example, but it demonstrates the basic usage of web sockets. In a more typical application, you might want to extend this to perhaps store the messages in a database like MongoDB - you could still use a similar set up where you broadcast new chat messages to the clients, but you would then also store the messages in the database.
If you’d like to dive further into using web sockets in NestJS, there is a great complex chat application example available here and you can also find a lot more information in the NestJS documentation for web sockets. | https://www.joshmorony.com/creating-a-simple-live-chat-server-with-nestjs-websockets/ | CC-MAIN-2020-10 | refinedweb | 2,320 | 50.46 |
BBC micro:bit
Button SHIM
Introduction
The button SHIM is another Raspberry Pi accessory from the UK electronics company Pimoroni. Like the other accessories, it is meant to fit on the 40-pin GPIO header strip of the Raspberry Pi. SHIMs are designed to be smaller and thinner than other accessories so that they can be combined with HATs and pHATs. With this SHIM, you get 5 miniature push buttons and a single APA102 pixel. Whilst this not sound like a lot of kit, the bonus is that these components are connected up to a TCA9554A port expander that can be controlled via i2c. It is reasonably priced at £6 and is a neat board to use when you need some inputs and an indicator LED but don't have the GPIO to spare in your project.
The photograph shows the button SHIM connected to the micro:bit on a 4tronix Bit:2:Pi. This is by far the easiest way to connect and power Raspberry Pi accessories on the micro:bit. You can also access the unused micro:bit pins and tap into the battery or USB power. The button SHIM used here is from Will's extensive collection.
Like all Pimoroni products, it is a designed extremely well. Having been designed for the Raspberry Pi, you get best value when you use it in that context with the nicely written library. 5 buttons is a nice round number to have for some of the ways you might interact with the 5x5 LED matrix. I've done quite a few projects where having one button per column would make the user interaction a little more direct.
Circuit
You are connecting via i2c. On the Bit:2:pi, you just have to make sure that you have the jumpers on the i2c pins and are providing battery/USB power. Otherwise, you want the following,
Programming
A lot of work has gone into the Pimoroni library for this, much of it oriented towards the capabilities of the Pi. It was a bit easier to build this up from the basics and learn about the port expander.
To work with the TCA9554A, we need to use 3 main registers. We'll deal with these in reverse.
0x03
This is the configuration register. The byte that you write to this register determines the direction, input or output, for the pins. We write the byte 0x1F, which is 00011111 in binary. This configures the first 5 pins on the port expander (P0-P4) as inputs. P5 to P7 are left as outputs. P6 is the clock pin for the APA102 pixel and P7 is the data pin.
0x02
This is the polarity register. The bits of the byte that you write determine whether or not the outputs signals are inverted. We write a 0 to this to leave our pins as normal.
0x01
This is the output register. The byte that we write to this register sets the output pins high or low according to the bit pattern. When we first connect to the IC, we write a 0 to this register to ensure that all outputs are low.
Reading
To read from the inputs, we write a 0 to the IC and read the byte that is returned. The bits of this byte tell us the state of the buttons.
The Pixel
In order to control the APA102 LED, we need to toggle the clock pin a great many times, each time with the correct signal being sent on the data pin. If we had the pins on separately controllable GPIO, we would set the correct value on the data pin, then write a HIGH and then a LOW to the clock pin. Using the port expander, we write 2 bytes. In the first byte, we have the correct value for the data bit and make sure that the clock bit is set HIGH. We then send the same byte with the clock pin set to LOW. There is a preamble and postamble of bits that acts as a 'latch' for the data transmission.
When doing something like this with a port expander, you have to think about the states of all of the pins involved in the protocol each time you want change one of the outputs.
The following gives a basic class to use with the SHIM. There is a quick test of the LED and then an infinite loop to test the buttons. Pressing a button lights up the top row matrix LED that is opposite.
from microbit import * class btnshim: def __init__(self): self.ADDR = 0x3F self.write_reg(0x03,0x1F) self.write_reg(0x02,0) self.write_reg(0x01,0) def write_reg(self,reg,value): i2c.write(0x3F, bytes([reg,value])) def read_btns(self): i2c.write(0x3F, b'\x00') data = i2c.read(0x3F,1)[0] bits = [data >> i & 1 for i in range(5)] return bits def write_byte(self,byte): bits = [byte >> i & 1 for i in range(7,-1,-1)] for i in range(8): self.write_reg(0x01, (bits[i]<<7) + 0x40) self.write_reg(0x01, (bits[i]<<7)) def set_pixel(self,r,g,b): # sof for i in range(32): self.write_reg(0x01,0x40) self.write_reg(0x01,0) # brightness self.write_byte(0xEF) # colours self.write_byte(b) self.write_byte(g) self.write_byte(r) # eof for i in range(36): self.write_reg(0x01,0x40) self.write_reg(0x01,0) shim = btnshim() # pixel test shim.set_pixel(255,0,0) #red sleep(1000) shim.set_pixel(0,255,0) #green sleep(1000) shim.set_pixel(0,0,255) #blue sleep(1000) shim.set_pixel(255,255,255) #white sleep(1000) shim.set_pixel(0,0,0) #off # button test while True: btns =shim.read_btns() for x in range(5): if btns[x]==0: display.set_pixel(4-x,0,9) else: display.set_pixel(4-x,0,0) sleep(50)
Summary
The code here can be simplified a little to make it more compact. That makes it relatively easy to insert the board into any project that would benefit from a little extra input. Connect 4 would have much easier user interactions if you could select the column you want with a single press. If you add 1 extra input, you can use the state of that one to make it so that the 5 buttons on the SHIM relate to either columns or rows on the LED matrix. That gives a neater way of allowing a user to select individual pixels on the matrix, making some of your project ideas a little more usable. | http://www.multiwingspan.co.uk/micro.php?page=btnshim | CC-MAIN-2019-09 | refinedweb | 1,084 | 73.98 |
Are you sure?
This action might not be possible to undo. Are you sure you want to continue?
© 2010 StraVis IT Solutions Pvt Ltd.
Overview
If you try to change the SAP program “SAPMF02K”, you will be prompted to enter the access key for that object.
2
© 2010 StraVis IT Solutions Pvt Ltd.
Overview
3
© 2010 StraVis IT Solutions Pvt Ltd.
.User-Exits Function-Exits Menu-Exits Screen-Exits 4 © 2010 StraVis IT Solutions Pvt Ltd.
Information on existing User-Exits 5 © 2010 StraVis IT Solutions Pvt Ltd. .
. * *----------------------------------* include zxf05u01. call customer-function “001”. 6 © 2010 StraVis IT Solutions Pvt Ltd.Function-Exits SAP Original Code “SAPMF02K” Function Module INCLUDE Program function exit_sapmf02k_001.. *----------------------------------* * include zxf05u01. This INCLUDE program is where you will write the customer-specific code. endfunction. This INCLUDE program will not be overwritten with an SAP upgrade because it is not SAP original code..
7 . © 2010 StraVis IT Solutions Pvt Ltd.Call Customer-Function Versus Call Function Both of these CALL statements refer to the function module “EXIT_SAPMF02K_001”. call function ‘EXIT_SAPMF02K_001’ The CALL CUSTOMER-FUNCTION statement will only execute the function module if the module is activated.
. UPDATE LOG Vendor # Vendor name 8 © 2010 StraVis IT Solutions Pvt Ltd. you want to insert a record into an update log that contains the vendor number and name of the updated record.Business Case Scenario When the user updates a vendor record.
Steps to Coding a Function-Exit 1. Locate Function-Exit(s) 2. Go to Function Module 3. . Code in INCLUDE Program 5. Create INCLUDE Program 4. Activate Function-Exit 9 © 2010 StraVis IT Solutions Pvt Ltd.
.Locate Function-Exit(s) 10 © 2010 StraVis IT Solutions Pvt Ltd.
. search for the string “call customerfunction” in the main program to find all of the function-exit(s) in the program.Locate Function-Exit(s) In program “SAPMF02K”. 11 © 2010 StraVis IT Solutions Pvt Ltd.
Locate Function-Exit(s) Double-click anywhere on the call customer-function “001” statement to go to that line in the SAP program. . 12 © 2010 StraVis IT Solutions Pvt Ltd. there is only one function-exit at line 83 of “MF02KFEX”. In program “SAPMF02K”.
Go to Function Module Double-click on „001‟ of the CALL CUSTOMER-FUNCTION “001” statement in the SAP program to go to the function module “EXIT_SAPMF02K_001”. . 13 © 2010 StraVis IT Solutions Pvt Ltd.
Create INCLUDE Program Double-click on the INCLUDE ZXF05U01 statement in the function module to create the INCLUDE program. . 14 © 2010 StraVis IT Solutions Pvt Ltd.
*-----------------------------if sy-uname = ‟ DANTHON‟. © 2010 StraVis IT Solutions Pvt Ltd. .Code in INCLUDE Program 1 2 *-----------------------------* INCLUDE ZXF05U01 Write your code in this INCLUDE program. 6 15 endif.
Activating Function-Exit PROJECT 1 PROJECT 2 (can be activated/deactivated) (can be activated/deactivated) Enhancement 1 Enhancement 2 X X Enhancement 3 Function Exit Screen Exit Function Exit Function Exit 16 © 2010 StraVis IT Solutions Pvt Ltd. .
User-Exit Transactions CMOD SMOD 17 © 2010 StraVis IT Solutions Pvt Ltd. .
Transaction CMOD In transaction CMOD. type in the name of your project and press the CREATE pushbutton. 18 © 2010 StraVis IT Solutions Pvt Ltd. .
.Transaction CMOD Once you SAVE your project. 19 © 2010 StraVis IT Solutions Pvt Ltd. you can add as many enhancements as you want by pressing the SAP enhancements pushbutton.
. 20 © 2010 StraVis IT Solutions Pvt Ltd.Transaction CMOD Add the enhancements you want included in the project.
you need to ACTIVATE it. 21 © 2010 StraVis IT Solutions Pvt Ltd.Transaction CMOD After saving your project. .
you can display its components.Transaction SMOD With the name of the enhancement. . 22 © 2010 StraVis IT Solutions Pvt Ltd.
there is only one user-exit – a function-exit using the function module “EXIT_SAPMF02K_001”.Transaction SMOD In the case of enhancement “SAPMF02K”. 23 © 2010 StraVis IT Solutions Pvt Ltd. .
Create INCLUDE Program 4. Go to Function Module 3. Activate Function-Exit 24 © 2010 StraVis IT Solutions Pvt Ltd.Summary Function-Exits Menu-Exits Screen-Exits 1. Code in INCLUDE Program 5. . Locate Function-Exit(s) 2.
Business Add-Ins © 2010 StraVis IT Solutions Pvt Ltd. .
What is BAdi • • New SAP enhancement technique To accommodate user requirements not available / too specific to be included in the SAP standard Program • • Based on ABAP Objects – It has Interfaces & Methods Guaranteed upward compatibility of all Business Add-In interfaces – Release upgrades do not affect enhancement calls from within the standard software nor do they affect the validity of call interfaces 26 © 2010 StraVis IT Solutions Pvt Ltd. .
partners. They can be inserted into the SAP System to accommodate user requirements too specific to be included in the standard delivery. an application programmer predefines exit points in a source that allow specific industry sectors. 27 © 2010 StraVis IT Solutions Pvt Ltd. . User Exit • In User Exits.BADI vs. • The users of Business Add-Ins can customize the logic they need or use a standard logic if one is available. and customers to attach additional software to standard SAP source code without having to modify the original object.
industry solutions.BAdi & Customer-Exit • Though different enhancement technique. and as country versions.Definition and its Implementation .definition can either be SAP provided or user may also create it – no longer assumes a two-system infrastructure (SAP and customers) – allows multiple levels of software development (by SAP. BAdi has following distinct features – Uses Object oriented approach – Two parts . and the like) 28 © 2010 StraVis IT Solutions Pvt Ltd. . and customers. partners.
BAdi – where to find • • • Look for BAdi in IMG and in component hierarchy (using transaction SE18) Create own implementation of the add-in (complete coding for Methods) and activate Enhancement's active components are called at runtime. . 29 © 2010 StraVis IT Solutions Pvt Ltd.
.BAdi Definition (SE18) • To include Business Add-Ins in the program – Define an interface for the enhancement in the SAP menu under Tools-> ABAP Workbench -> Utilities -> Business Add-Ins -> Definition (transaction SE18) – Call the interface at the appropriate point in application program – Customers can then select the add-in and implement it according to their needs 30 © 2010 StraVis IT Solutions Pvt Ltd.
BAdi Implementation (SE19) • • • • ABAP Workbench ->Utilities -> Business Add-Ins -> Implementation (transaction SE19) Find the suitable Business Add-Ins present in system (Use IMG or Component hierarchy) Use Add-Ins Documentation to understand functionality & to decide Implement the Add-Ins – a class is created with the same interface – Finalize coding for the method • Implementations are discrete transport objects and lie within the namespace of the person or organization implementing them 31 © 2010 StraVis IT Solutions Pvt Ltd. .
At run time. What qualifies as a filter? A Data element Underlying domain may contain a maximum of 30 characters and must be of Character type The data element must Either have a search help with a search help parameter of the same type as the data element and this parameter must serve as both the import and export parameter or the element's domain must have fixed domain values or a value table containing a column with the same type as the data element Before implementing filter objects to a BADI. We need to deactivate all the implementation for that BADI. country-specific or companycode specific). separate implementation of the same Add-In can be created and activated. the specific implementation will be executed Possible through filter dependent BADI. 32 © 2010 StraVis IT Solutions Pvt Ltd. .g..Filter dependent BADI If enhancement needs to be different based on some parameter (e.
.Contd. Click on the F4 on the FILTER TYPE field name and enter some search criteria to find a relevant data element. 33 © 2010 StraVis IT Solutions Pvt Ltd..
.Contd. . • • Click on the save button and active the BADI definition. 34 © 2010 StraVis IT Solutions Pvt Ltd. Now to ensure that the interface parameters have been adjusted by the system click on the interface tab and then double click on any method to see the list of parameters.
The filter value acts like a condition. Return to the initial screen of the implementation and define a filter value by clicking on the INSERT ROW button under the filter section. 35 © 2010 StraVis IT Solutions Pvt Ltd. • And as you can see from the screen shot below the new parameter list is automatically available in the implementation part.Contd.. • Now lets use this filter type and make changes to the program. . Once done click on save and activate the implementation. Select the first implementation and click on the continue button You will notice that now the implementation is capable of adding filters for the user names. You can click on the INSERT ROW button to add any new filters. Only if. Select the menu IMPLEMENTATION -> DISPLAY. • • • The system will display all the implementation if it has more than one implementation. during the runtime the filter value matches the method will get executed. • • Click on the save and activate the method.
Thank you 36 © 2010 StraVis IT Solutions Pvt Ltd. . | https://www.scribd.com/document/55516020/User-Exits-Badi | CC-MAIN-2018-05 | refinedweb | 1,490 | 58.58 |
Step One: Please in future post code using the (CODE) button, which gives lots of good effect for very very little effort. (Please also look for the 'thread is solved' link at the bottom and after the thread is really solved be sure to mark it solved: That, too, gives good effect for very little effort.
The compiler sees thing in the order they are shown in the file. If
main() needs to call
readmiles() then
readmiles() needs to be at least declared (if not defined) before
main is seen. For this particular program, just re-order things. For larger programs, you will learn to create a header file that declares functions, classes, etc; then
#include that header before the code that uses the declared "things". The body of the function (the implementation, also called definition) can be seen later as long as the linker can find it somewhere.
or
#include <iostream> using namespace std; int readmiles( ); double calcKM(int numMiles);; }
I think this would work. I don't know wether you would need to use void with readmiles?? | https://www.daniweb.com/software-development/cpp/threads/355404/calling-functions-in-beginning-c- | CC-MAIN-2015-32 | refinedweb | 179 | 76.96 |
Using Amazon CloudWatch Alarms
You can create a CloudWatch alarm that watches a single CloudWatch metric or the result of a math expression based on CloudWatch metrics.
Note
CloudWatch doesn't test or validate the actions that you specify, nor does it detect any Amazon EC2 Auto Scaling or Amazon SNS errors resulting from an attempt to invoke nonexistent actions. Make sure that your actions exist.
Alarm States
An alarm has the following possible states:
OK—The metric or expression is within the defined threshold.
ALARM—The metric or expression is outside of the defined threshold.
INSUFFICIENT_DATA—The alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state.
Evaluating an Alarm
When you create an alarm, you specify three settings to enable CloudWatch to evaluate when to change the alarm state:
Period is the length of time to evaluate the metric or expression to create each individual data point for an alarm. It is expressed in seconds. If you choose one minute as the period, there is one datapoint every minute.
Evaluation Period is the number of the most recent periods, or data points, to evaluate when determining alarm state.
Datapoints to Alarm is the number of data points within the evaluation period that must be breaching to cause the alarm to go to the ALARM state. The breaching data points do not have to be consecutive; they just must all be within the last number of data points equal to Evaluation Period.
In the following figure, the alarm threshold is set to three units. The alarm is configured to go to the ALARM state, and both Evaluation Period and Datapoints to Alarm are 3. That is, when all three datapoints in the most recent three consecutive periods are above the threshold, the alarm goes to the ALARM state. In the figure, this happens in the third through fifth time periods. At period six, the value dips below the threshold, so one of the periods being evaluated is not breaching, and the alarm state changes to OK. During the ninth time period, the threshold is breached again, but for only one period. Consequently, the alarm state remains OK.
When you configure Evaluation Period and Datapoints to Alarm as different values, you are setting an "M out of N" alarm. Datapoints to Alarm is ("M") and Evaluation Period is ("N"). The evaluation interval is the number of datapoints multiplied by the period. For example, if you configure 4 out of 5 datapoints with a period of 1 minute, the evaluation interval is 5 minutes. If you configure 3 out of 3 datapoints with a period of 10 minutes, the evaluation interval is 30 minutes.
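The arithmetic above can be sketched in a few lines. The helpers below are an illustrative model only, not CloudWatch's internal code: the evaluation interval is the period multiplied by the number of evaluation periods, and the "M out of N" check simply counts breaching datapoints, which need not be consecutive.

```python
def evaluation_interval_seconds(period_seconds, evaluation_periods):
    # evaluation interval = period length x number of datapoints evaluated
    return period_seconds * evaluation_periods

def is_breaching_alarm(datapoints, datapoints_to_alarm):
    # datapoints: one boolean per period in the evaluation window,
    # True = breaching; breaching points do not have to be consecutive
    return sum(datapoints) >= datapoints_to_alarm

# 4 out of 5 datapoints with a 1-minute period -> 5-minute evaluation interval
print(evaluation_interval_seconds(60, 5))   # 300 seconds
# 3 out of 3 datapoints with a 10-minute period -> 30-minute interval
print(evaluation_interval_seconds(600, 3))  # 1800 seconds
# a 4-out-of-5 alarm: non-consecutive breaches still count
print(is_breaching_alarm([True, False, True, True, True], 4))  # True
```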
Configuring How CloudWatch Alarms Treat Missing Data
Sometimes some data points for a metric with an alarm do not get reported to CloudWatch. For example, this can happen when a connection is lost, a server goes down, or when a metric reports data only intermittently by design.
CloudWatch enables you to specify how to treat missing data points when evaluating an alarm. This can help you configure your alarm to go to the ALARM state when appropriate for the type of data being monitored. You can avoid false positives when missing data does not indicate a problem.
Similar to how each alarm is always in one of three states, each specific data point reported to CloudWatch falls under one of three categories:
Not breaching (within the threshold)
Breaching (violating the threshold)
Missing
For each alarm, you can specify CloudWatch to treat missing data points as any of the following:
good (notBreaching the threshold)—Missing data points are treated as being within the threshold
bad (breaching the threshold)—Missing data points are treated as breaching the threshold
ignore—The current alarm state is maintained
missing—The alarm does not consider missing data points when evaluating whether to change state
The best choice depends on the type of metric. For a metric that continually reports data, such as CPUUtilization of an instance, you might want to treat missing data points as breaching, because they may indicate that something is wrong. But for a metric that generates data points only when an error occurs, such as ThrottledRequests in Amazon DynamoDB, you would want to treat missing data as notBreaching. The default behavior is missing.
Choosing the best option for your alarm prevents unnecessary and misleading alarm condition changes, and also more accurately indicates the health of your system.
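As a concrete sketch, here is how the missing-data treatment might be set when defining an alarm. The alarm name, table name, and threshold below are placeholders, and the dict is shaped for boto3's `put_metric_alarm` call (not invoked here):

```python
# A hypothetical alarm on a metric that reports only when errors occur,
# so missing data is treated as "good" (notBreaching). In a real
# application you would pass this dict to boto3:
#     boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
alarm_params = {
    "AlarmName": "throttled-requests-high",   # placeholder name
    "Namespace": "AWS/DynamoDB",
    "MetricName": "ThrottledRequests",
    "Dimensions": [{"Name": "TableName", "Value": "my-table"}],  # placeholder
    "Statistic": "Sum",
    "Period": 60,
    "EvaluationPeriods": 3,
    "DatapointsToAlarm": 2,
    "Threshold": 10.0,
    "ComparisonOperator": "GreaterThanThreshold",
    # valid values: "breaching", "notBreaching", "ignore", "missing"
    "TreatMissingData": "notBreaching",
}
print(alarm_params["TreatMissingData"])  # notBreaching
```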
How Alarm State is Evaluated When Data is Missing
No matter what value you set for how to treat missing data, when an alarm evaluates whether to change state, CloudWatch attempts to retrieve a higher number of data points than specified by Evaluation Periods. The exact number of data points it attempts to retrieve depends on the length of the alarm period and whether it is based on a metric with standard resolution or high resolution. The timeframe of the data points that it attempts to retrieve is the evaluation range.
Once CloudWatch retrieves these data points, the following happens:
If no data points in the evaluation range are missing, CloudWatch evaluates the alarm based on the most recent data points collected.
If some data points in the evaluation range are missing, but the number of existing data points retrieved is equal to or more than the alarm's Evaluation Periods, CloudWatch evaluates the alarm state based on the most recent existing data points that were successfully retrieved. In this case, the value you set for how to treat missing data is not needed and is ignored.
If some data points in the evaluation range are missing, and the number of existing data points that were retrieved is lower than the alarm's number of evaluation periods, CloudWatch fills in the missing data points with the result you specified for how to treat missing data, and then evaluates the alarm. However, any real data points in the evaluation range, no matter when they were reported, are included in the evaluation. CloudWatch uses missing data points only as few times as possible.
In all of these situations, the number of datapoints evaluated is equal to the value of Evaluation Periods. If fewer than the value of Datapoints to Alarm are breaching, the alarm state is set to OK. Otherwise, the state is set to ALARM.
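The fill-in behavior described above can be modeled in a few lines. This is a deliberately simplified sketch: it handles only the "breaching" and "notBreaching" treatments, and ignores the evaluation-range retrieval details and the "ignore"/"missing" state-maintenance rules.

```python
def evaluate_with_missing(points, evaluation_periods, datapoints_to_alarm,
                          treat_missing):
    """Simplified model of alarm evaluation with missing data.

    points: the evaluation range, oldest first; True = breaching,
    False = not breaching, None = missing. Only the "breaching" and
    "notBreaching" treatments are modeled here.
    """
    real = [p for p in points if p is not None]
    # use the most recent real data points first ...
    evaluated = real[-evaluation_periods:]
    # ... and fill in missing points only as few times as possible
    fill = treat_missing == "breaching"
    evaluated += [fill] * (evaluation_periods - len(evaluated))
    return "ALARM" if sum(evaluated) >= datapoints_to_alarm else "OK"

# One real non-breaching point plus two filled-as-breaching points is only
# 2 breaching out of the 3 required, so the alarm stays OK for now.
print(evaluate_with_missing([False, None, None, None, None], 3, 3, "breaching"))  # OK
# Three real breaching points: the treatment setting is never consulted.
print(evaluate_with_missing([None, None, True, True, True], 3, 3, "notBreaching"))  # ALARM
```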
Note
A particular case of this behavior is that CloudWatch alarms may repeatedly re-evaluate the last set of data points for a period of time after the metric has stopped flowing. This re-evaluation may cause the alarm to change state and re-execute actions, if it had changed state immediately prior to the metric stream stopping. To mitigate this behavior, use shorter periods.
The following tables illustrate examples of the alarm evaluation behavior. In the first table, Datapoints to Alarm and Evaluation Periods are both 3. CloudWatch retrieves the 5 most recent data points when evaluating the alarm.
Column 2 shows how many of the 3 necessary data points are missing. Even though the most recent 5 data points are evaluated, only 3 (the setting for Evaluation Periods) are necessary to evaluate the alarm state. The number of data points in Column 2 is the number of data points that must be "filled in", using the setting for how missing data is being treated.
Columns 3-6 show the alarm state that would be set for each setting of how missing data should be treated, shown at the top of each column. In the data points column, 0 is a non-breaching data point, X is a breaching data point, and - is a missing data point.
In the second row of the preceding table, the alarm stays OK even if missing data is treated as breaching, because the one existing data point is not breaching, and this is evaluated along with two missing data points which are treated as breaching. The next time this alarm is evaluated, if the data is still missing it will go to ALARM, as that non-breaching data point will no longer be among the 5 most recent data points retrieved. In the fourth row, the alarm goes to ALARM state in all cases because there are enough real data points so that the setting for how to treat missing data does not need to be considered.
In the next table, the Period is again set to 5 minutes, and Datapoints to Alarm is only 2 while Evaluation Periods is 3. This is a 2 out of 3, M out of N alarm.
If data points are missing soon after you create an alarm, and the metric was being reported to CloudWatch before you created the alarm, CloudWatch retrieves the most recent data points from before the alarm was created when evaluating the alarm.
High-Resolution Alarms
For more information about high-resolution metrics, see Publishing Custom Metrics.
Alarms on Math Expressions
You can set an alarm on the result of a math expression that is based on one or more CloudWatch metrics. A math expression used for an alarm can include as many as 10 metrics. Each metric must be using the same period.
For an alarm based on a math expression, you can specify how you want CloudWatch to treat missing data points for the underlying metrics when evaluating the alarm.
Alarms based on math expressions cannot perform Amazon EC2 actions.
For more information about metric math expressions and syntax, see Using Metric Math.
Percentile-Based CloudWatch Alarms and Low Data Samples
When you set a percentile as the statistic for an alarm, you can specify what to do when there is not enough data for a good statistical assessment. You can choose to have the alarm evaluate the statistic anyway and possibly change the alarm state. Or, you can have the alarm ignore the metric while the sample size is low, and wait to evaluate it until there is enough data to be statistically significant.
For percentiles between 0.5 and 1.00, this setting is used when there are fewer than 10/(1-percentile) data points during the evaluation period. For example, this setting would be used if there were fewer than 1000 samples for an alarm on a p99 percentile. For percentiles between 0 and 0.5, the setting is used when there are fewer than 10/percentile data points.
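The sample-size rule in the preceding paragraph is simple arithmetic, shown here as a small sketch (the function name is an assumption for illustration; the behavior at exactly p = 0.5 is not specified above, so the branch choice there is arbitrary):

```python
# Minimum sample counts for percentile alarms, per the rule above:
# 10/(1-p) for percentiles above 0.5, and 10/p for percentiles below 0.5.
def min_samples_for_percentile(p):
    """Minimum number of data points needed in the evaluation period
    before the percentile statistic is considered significant."""
    if p > 0.5:
        return 10 / (1 - p)
    return 10 / p

print(round(min_samples_for_percentile(0.99)))  # -> 1000 (matches the p99 example)
print(round(min_samples_for_percentile(0.10)))  # -> 100
```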
Common Features of CloudWatch Alarms
The following features apply to all CloudWatch alarms:
You can create up to 5000 alarms per region per AWS account. To create or update an alarm, you use the PutMetricAlarm API action (mon-put-metric-alarm command).
Alarm names must contain only ASCII characters.
You can list any or all of the currently configured alarms, and list any alarms in a particular state, using DescribeAlarms (mon-describe-alarms). You can further filter the list by time range.
You can disable and enable alarms by using DisableAlarmActions and EnableAlarmActions (mon-disable-alarm-actions and mon-enable-alarm-actions).
You can test an alarm by setting it to any state using SetAlarmState (mon-set-alarm-state). This temporary state change lasts only until the next alarm comparison occurs.
You can create an alarm using PutMetricAlarm (mon-put-metric-alarm) before you've created a custom metric. For the alarm to be valid, you must include all of the dimensions for the custom metric, in addition to the metric namespace and metric name, in the alarm definition.
You can view an alarm's history using DescribeAlarmHistory (mon-describe-alarm-history). CloudWatch preserves alarm history for two weeks. Each state transition is marked with a unique timestamp. In rare cases, your history might show more than one notification for a state change. The timestamp enables you to confirm unique state changes.
The number of evaluation periods for an alarm multiplied by the length of each evaluation period cannot exceed one day. Note that missing metric data may simply indicate that your resource is inactive, and does not necessarily mean that there is a problem.
Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations. Retrieval using sparse representations is provided via integration with our group's Anserini IR toolkit, which is built on Lucene. Retrieval using dense representations is provided via integration with Facebook's Faiss library.
Pyserini is primarily designed to provide effective, reproducible, and easy-to-use first-stage retrieval in a multi-stage ranking architecture. Our toolkit is self-contained as a standard Python package and comes with queries, relevance judgments, pre-built indexes, and evaluation scripts for many commonly used IR test collections.
With Pyserini, it's easy to reproduce runs on a number of standard IR test collections! A low-effort way to try things out is to look at our online notebooks, which will allow you to get started with just a few clicks.
Install via PyPI (requires Python 3.6+):
pip install pyserini
Sparse retrieval depends on Anserini, which is itself built on Lucene, and thus Java 11.
Dense retrieval depends on neural networks and requires a more complex set of dependencies.
A `pip` installation will automatically pull in the 🤗 Transformers library to satisfy the package requirements.
Pyserini also depends on PyTorch and Faiss, but since these packages may require platform-specific custom configuration, they are not explicitly listed in the package requirements.
We leave the installation of these packages to you.
The software ecosystem is rapidly evolving and a potential source of frustration is incompatibility among different versions of underlying dependencies. We provide additional detailed installation instructions here.
If you're planning on just using Pyserini, then the `pip` instructions above are fine.
However, if you're planning on contributing to the codebase or want to work with the latest not-yet-released features, you'll need a development installation.
For this, clone our repo with the `--recurse-submodules` option to make sure the `tools/` submodule also gets cloned.
The `tools/` directory, which contains evaluation tools and scripts, is actually this repo, integrated as a Git submodule (so that it can be shared across related projects).
Build as follows (you might get warnings, but okay to ignore):
cd tools/eval && tar xvfz trec_eval.9.0.4.tar.gz && cd trec_eval.9.0.4 && make && cd ../../.. cd tools/eval/ndeval && make && cd ../../..
Next, you'll need to clone and build Anserini.
It makes sense to put both `pyserini/` and `anserini/` in a common folder.
After you've successfully built Anserini, copy the fatjar, which will be `target/anserini-X.Y.Z-SNAPSHOT-fatjar.jar`, into `pyserini/resources/jars/`.
As with the `pip` installation, a potential source of frustration is incompatibility among different versions of underlying dependencies.
For these and other issues, we provide additional detailed installation instructions here.
You can confirm everything is working by running the unit tests:
python -m unittest
Assuming all tests pass, you should be ready to go!
Pyserini supports sparse retrieval (e.g., BM25 ranking using bag-of-words representations), dense retrieval (e.g., nearest-neighbor search on transformer-encoded representations), as well as hybrid retrieval that integrates both approaches.
The `SimpleSearcher` class provides the entry point for sparse retrieval using bag-of-words representations.
Anserini supports a number of pre-built indexes for common collections that it'll automatically download for you and store in `~/.cache/pyserini/indexes/`.
Here's how to use a pre-built index for the MS MARCO passage ranking task and issue a query interactively:
from pyserini.search import SimpleSearcher searcher = SimpleSearcher.from_prebuilt_index('msmarco-passage') hits = searcher.search('what is a lobster roll?') for i in range(0, 10): print(f'{i+1:2} {hits[i].docid:7} {hits[i].score:.5f}')
The results should be as follows:
1 7157707 11.00830 2 6034357 10.94310 3 5837606 10.81740 4 7157715 10.59820 5 6034350 10.48360 6 2900045 10.31190 7 7157713 10.12300 8 1584344 10.05290 9 533614 9.96350 10 6234461 9.92200
To further examine the results:
# Grab the raw text: hits[0].raw # Grab the raw Lucene Document: hits[0].lucene_document
Pre-built Anserini indexes are hosted at the University of Waterloo's GitLab and mirrored on Dropbox. The following method will list available pre-built indexes:
SimpleSearcher.list_prebuilt_indexes()
A description of what's available can be found here. Alternatively, see this answer for how to download an index manually.
The `SimpleDenseSearcher` class provides the entry point for dense retrieval, and its usage is quite similar to `SimpleSearcher`.
The only additional thing we need to specify for dense retrieval is the query encoder.
from pyserini.dsearch import SimpleDenseSearcher, TctColBertQueryEncoder encoder = TctColBertQueryEncoder('castorini/tct_colbert-msmarco') searcher = SimpleDenseSearcher.from_prebuilt_index( 'msmarco-passage-tct_colbert-hnsw', encoder ) hits = searcher.search('what is a lobster roll') for i in range(0, 10): print(f'{i+1:2} {hits[i].docid:7} {hits[i].score:.5f}')
If you encounter an error (on macOS), you'll need the following:
import os os.environ['KMP_DUPLICATE_LIB_OK']='True'
The results should be as follows:
1 7157710 70.53742 2 7157715 70.50040 3 7157707 70.13804 4 6034350 69.93666 5 6321969 69.62683 6 4112862 69.34587 7 5515474 69.21354 8 7157708 69.08416 9 6321974 69.06841 10 2920399 69.01737
The `HybridSearcher` class provides the entry point to perform hybrid sparse-dense retrieval:
from pyserini.search import SimpleSearcher from pyserini.dsearch import SimpleDenseSearcher, TctColBertQueryEncoder from pyserini.hsearch import HybridSearcher ssearcher = SimpleSearcher.from_prebuilt_index('msmarco-passage') encoder = TctColBertQueryEncoder('castorini/tct_colbert-msmarco') dsearcher = SimpleDenseSearcher.from_prebuilt_index( 'msmarco-passage-tct_colbert-hnsw', encoder ) hsearcher = HybridSearcher(dsearcher, ssearcher) hits = hsearcher.search('what is a lobster roll') for i in range(0, 10): print(f'{i+1:2} {hits[i].docid:7} {hits[i].score:.5f}')
The results should be as follows:
1 7157715 71.56022 2 7157710 71.52962 3 7157707 71.23887 4 6034350 70.98502 5 6321969 70.61903 6 4112862 70.33807 7 5515474 70.20574 8 6034357 70.11168 9 5837606 70.09911 10 7157708 70.07636
In general, hybrid retrieval will be more effective than dense retrieval, which will be more effective than sparse retrieval.
Another commonly used feature in Pyserini is to fetch a document (i.e., its text) given its `docid`.
This is easy to do:
from pyserini.search import SimpleSearcher searcher = SimpleSearcher.from_prebuilt_index('msmarco-passage') doc = searcher.doc('7157715')
From `doc`, you can access its `contents` as well as its `raw` representation.
The `contents` hold the representation of what's actually indexed; the `raw` representation is usually the original "raw document".
A simple example can illustrate this distinction: for an article from CORD-19, `raw` holds the complete JSON of the article, which obviously includes the article contents, but has metadata and other information as well.
The `contents` contain extracts from the article that's actually indexed (for example, the title and abstract).
In most cases, `contents` can be deterministically reconstructed from `raw`.
When building the index, we specify flags to store `contents` and/or `raw`; it is rarely the case that we store both, since that would be a waste of space.
In the case of the pre-built `msmarco-passage` index, we only store `raw`.
Thus:
# Document contents: what's actually indexed. # Note, this is not stored in the pre-built msmacro-passage index. doc.contents() # Raw document doc.raw()
As you'd expect, `doc.id()` returns the `docid`, which is `7157715` in this case.
Finally, `doc.lucene_document()` returns the underlying Lucene `Document` (i.e., a Java object).
With that, you get direct access to the complete Lucene API for manipulating documents.
Since each text in the MS MARCO passage corpus is a JSON object, we can read the document into Python and manipulate:
import json json_doc = json.loads(doc.raw()) json_doc['contents'] # 'contents' of the document: # A Lobster Roll is a bread roll filled with bite-sized chunks of lobster meat...
Every document has a `docid`, of type string, assigned by the collection it is part of.
In addition, Lucene assigns each document a unique internal id (confusingly, Lucene also calls this the `docid`), which is an integer numbered sequentially starting from zero to one less than the number of documents in the index.
This can be a source of confusion, but the meaning is usually clear from context.
Where there may be ambiguity, we refer to the external collection `docid` and Lucene's internal `docid` to be explicit.
Programmatically, the two are distinguished by type: the first is a string and the second is an integer.
As an important side note, Lucene's internal `docid`s are not stable across different index instances.
That is, in two different index instances of the same collection, Lucene is likely to have assigned different internal `docid`s for the same document.
This is because the internal `docid`s are assigned based on document ingestion order; this will vary due to thread interleaving during indexing (which is usually performed on multiple threads).
The `doc` method in `searcher` takes either a string (interpreted as an external collection `docid`) or an integer (interpreted as Lucene's internal `docid`) and returns the corresponding document.
Thus, a simple way to iterate through all documents in the collection (and, for example, print out its external collection `docid`) is as follows:
for i in range(searcher.num_docs): print(searcher.doc(i).docid())
To build sparse (i.e., Lucene inverted indexes) on your own document collections, following the instructions below. To build dense indexes (e.g., the output of transformer encoders) on your own document collections, see instructions here. The following covers English documents; if you want to index and search multilingual documents, check out this answer.
Pyserini (via Anserini) provides ingestors for document collections in many different formats. The simplest, however, is the following JSON format:
{ "id": "doc1", "contents": "this is the contents." }
A document is simply comprised of two fields, a `docid` and `contents`.
Pyserini accepts collections comprised of these documents organized in three different ways:
So, the quickest way to get started is to write a script that converts your documents into the above format. Then, you can invoke the indexer (here, we're indexing JSONL, but any of the other formats work as well):
python -m pyserini.index -collection JsonCollection \ -generator DefaultLuceneDocumentGenerator \ -threads 1 \ -input integrations/resources/sample_collection_jsonl \ -index indexes/sample_collection_jsonl \ -storePositions -storeDocvectors -storeRaw
Three options control the type of index that is built:
-storePositions: builds a standard positional index
-storeDocvectors: stores doc vectors (required for relevance feedback)
-storeRaw: stores raw documents
If you don't specify any of the three options above, Pyserini builds an index that only stores term frequencies. This is sufficient for simple "bag of words" querying (and yields the smallest index size).
Once indexing is done, you can use `SimpleSearcher` to search the index:
from pyserini.search import SimpleSearcher searcher = SimpleSearcher('indexes/sample_collection_jsonl') hits = searcher.search('document') for i in range(len(hits)): print(f'{i+1:2} {hits[i].docid:4} {hits[i].score:.5f}')
You should get something like the following:
1 doc2 0.25620 2 doc3 0.23140
If you want to perform a batch retrieval run (e.g., directly from the command line), organize all your queries in a tsv file, like here.
The format is simple: the first field is a query id, and the second field is the query itself.
Note that the file extension must end in `.tsv` so that Pyserini knows what format the queries are in.
Then, you can run:
$ python -m pyserini.search --topics integrations/resources/sample_queries.tsv \ --index indexes/sample_collection_jsonl \ --output run.sample.txt \ --bm25 $ cat run.sample.txt 1 Q0 doc2 1 0.256200 Anserini 1 Q0 doc3 2 0.231400 Anserini 2 Q0 doc1 1 0.534600 Anserini 3 Q0 doc1 1 0.256200 Anserini 3 Q0 doc2 2 0.256199 Anserini 4 Q0 doc3 1 0.483000 Anserini
Note that output run file is in standard TREC format.
You can also add extra fields in your documents when needed, e.g., text features.
For example, the SpaCy Named Entity Recognition (NER) result of `contents` could be stored as an additional field `NER`.
{ "id": "doc1", "contents": "The Manhattan Project and its atomic bomb helped bring an end to World War II. Its legacy of peaceful uses of atomic energy continues to have an impact on history and science.", "NER": { "ORG": ["The Manhattan Project"], "MONEY": ["World War II"] } }
With Pyserini, it's easy to reproduce runs on a number of standard IR test collections!
Pyserini provides baselines for a number of datasets.
Anserini is designed to work with JDK 11. There was a JRE path change above JDK 9 that breaks pyjnius 1.2.0, as documented in this issue, also reported in Anserini here and here. This issue was fixed with pyjnius 1.2.1 (released December 2019). The previous error was documented in this notebook and this notebook documents the fix.
With v0.11.0.0 and before, Pyserini versions adopted the convention of X.Y.Z.W, where X.Y.Z tracks the version of Anserini, and W is used to distinguish different releases on the Python end. Starting with Anserini v0.12.0, Anserini and Pyserini versions have become decoupled. | https://openbase.com/python/pyserini | CC-MAIN-2021-39 | refinedweb | 2,183 | 51.55 |
definition and usage. the floor method rounds a number downwards to the nearest integer, and returns the result. if the passed argument is an integer, the value will not be rounded.
floor and ceiling functions. in mathematics and computer science, the floor function is the function that takes as input a real number and gives as output the greatest integer less than or equal to , denoted . similarly, the ceiling function maps to the least integer greater than or equal to , denoted . for example, and .
floor function in excel always rounds the value down towards zero and always returns a numeric value. floor in excel is in the list of the basic rounding functions in excel, though it works in a similar manner like mround function, the only difference is that it always pushes down the number to the nearest multiple of the significance.
the floor function returns the largest integer that is smaller than or equal to x. required header: in C, the required header for the floor function is <math.h>.
a floor jack is a tool used to lift heavy objects such as cars and construction materials. it comes in different types and categories. each type comes in various forms and functions. it is often categorized based on the amount of pressure applied upon use.
in mathematics and computer science, the floor and ceiling functions map a real number to the greatest preceding or the least succeeding integer, respectively. floor x : returns the largest integer that is smaller than or equal to x i.e : rounds downs the nearest integer .
the floor function differs from the floor.math function in these ways: floor.math provides a default significance of 1, rounding to the nearest integer. floor.math provides support for rounding negative numbers toward zero or away from zero. floor.math appears to use the absolute value of the
description. the python number method floor() returns the floor of x, the largest integer not greater than x. syntax: import math; math.floor(x). note: this function is not accessible directly, so we need to import the math module and then call this function using the math static object. parameters: x, a numeric expression.
look at the number line - floor: go to the next integer left of where you are. - ceiling: go to the next integer right of where you are. lower bound: lower than or equal to it. upper bound: more than or equal to it.
python math.floor function is a mathematical function which is used to return the floor of given number value x, a number value that is not greater than x. math.floor function in python math.floor function exists in standard math library of python programming .
floor function the sql floor function rounds any positive or negative decimal value down to the next least integer value. sql distinct along with the sql floor function is used to retrieve only unique values after rounding down to the next least integer value, depending on the column specified.
the floor function in C++ returns the largest possible integer value which is less than or equal to the given argument. floor() prototype: as of the C++11 standard, the floor function takes a single argument and returns a value of type double, float, or long double. this function is defined in the <cmath> header file. floor() parameters.
where the square brackets indicate the use of the floor function. the floor and ceil functions in a programming language might use a combination of truncation (rounding towards zero) and comparisons to actually achieve their result.
syntax: import math; math.floor(x). note: this function is not accessible directly, so we need to import the math module and then call this function using the math static object. parameters: x, a numeric expression. return value: this method returns the largest integer not greater than x. example: the following example shows the usage of the floor() method.
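The snippets above can be condensed into one runnable sketch using only the standard library; it also shows how floor differs from truncation for negative numbers, a distinction the snippets gloss over:

```python
import math

# math.floor() rounds toward negative infinity; int() truncates toward zero.
print(math.floor(2.7))    # -> 2
print(math.floor(-2.7))   # -> -3 (floor goes down; truncation would give -2)
print(int(-2.7))          # -> -2 (truncation toward zero)
print(math.floor(5))      # -> 5  (an integer argument is not rounded)
print(math.ceil(2.1))     # -> 3  (ceiling: the smallest integer >= x)
```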
Struts 2.1.8 Login Form
Struts 2.1.8 Login Form
... to
validate the login form using Struts 2 validator framework.
About the example:
This example will display the login form to the user. If user enters Login
Name -
Java Struts 2 Programmer
Java Struts 2 Programmer
... Years
Keywords: Software developer, Java Programmer, Struts Programmer... ID: Java Struts 2 Programmer
java - Struts
java code for login page using struts without database ...but using... the form of jsp that we submit...plse help me.....i am in trouble... Hi
Insert into database
java - Struts
java when i hit the submit button for the login page, it displays a blank page....
please solve it... Hi friend,
This is login form code
Insert into database
java - Struts
java This is my login jsp page::
function...;
}
User Login
User ID:
Submit
struts
java - Struts
,struts-config.xml,web.xml,login form ,success and failure page also...
code...java hi..,
i wrote login page with hardcodeted username... own code ....ok ..plse do it...
LoginForm.java
package java;
import
java - Struts
java wanted code for login page without database ... using flat files like Excel, Word etc... please help me.... Hi
login form without database
Login form without Database
Struts - Struts
Struts Hello
I have 2 java pages and 2 jsp pages in struts... success
registrationForm.java for form beans
Success.jsp... with source code to solve the problem.
For read more information on Struts visit
java - Struts
/loginpage.jsp
login page ::
function validate(objForm...;
}
User Login
User ID:
struts-config.xml
struts - Struts
struts Hi,
i want to develop a struts application,iam using eclipse.... go to properties.
3. go to java build path.
4. then click on libraries
5... as such. Moreover war files are the compressed form of your projects
struts application - Struts
struts login application form code Hi, As i'm new to struts can anyone send me the coding for developing a login form application which involves a database search like checking user name in database
Setter methods of form bean class in Struts applications who calls the setter methods of form bean class in struts applications Am newly developed struts applipcation,I want to know how to logout the page using the strus
Please visit the following link:
Struts Login Logout Application
java - Struts
Java - Validating a form field in JSP using Struts I need an example in Java struts that validates a form field in JSP
STRUTS
STRUTS 1) Difference between Action form and DynaActionForm?
2) How the Client request was mapped to the Action file? Write the code and explain
java - Struts
java Hai friend,
How to design an Employee form in Struts?
And how will the database connections be done in Struts?
please forward answers as early as possible.
Thank you
Struts Architecture - Struts
the interactive form based applications with server pages.
Struts...Struts Architecture
Hi Friends,
Can u give clear struts architecture with flow. Hi friend,
Struts is an open source
Application |
Struts 2 |
Struts1 vs
Struts2 |
Introduction... directory Structure |
Writing Jsp, Java and Configuration files |
Struts 2 xml... |
Developing
Login Application in Struts 2 |
Running
and testing application
login application - Struts
login application Can anyone give me complete code of user login application using struts and database? Hello,
Here is good example of Login and User Registration Application using Struts Hibernate and Spring
Struts Projects
Struts Plugin
In this section we will write Hibernate Struts Plugin Java...;
and User Registration Application Using Struts... application.
Form
Login Screen our web application
Struts
web applications quickly and easily. Struts combines Java Servlets, Java Server...
java - Struts
:
In Action Mapping
In login jsp
For read more information on struts visit to :
Thanks... friend.
what can i do.
In Action Mapping
In login jsp
what is struts? - Struts
of the Struts framework is a flexible control layer based on standard technologies like Java...:// is struts? What is struts?????how it is used n what
java - Struts
:
*)The form beans of DynaValidatorForm are created by Struts and you configure in the Struts config :
*)The Form Bean can be used...java how can i get dynavalidation in my applications using struts
Login Action Class - Struts
Login Action Class Hi
Any one can you please give me example of Struts How Login Action Class Communicate with i-batis - Struts
Java Struts How can we configure the declarative validations by using the DynaValidationAction Form Login Authentication
Struts Login Authentication Hi Sir,
Am doing a project in a struts 1.2,i want login authentication code fro that, only authenticated user can login into application,i am using back end as Mysql.so send me code as soon
Struts Tutorials - Jakarta Struts Tutorial
to develop Login Form in
Struts, validate the user and show error message... Tiles, Struts Validation Framework, Java Script validations are covered... of the Java Servlets in struts is to handle requests
made by the client
struts - Framework
struts Hi,roseindia
I want best example for struts Login... in struts... Hi Friend,
You can get login applications from the following links:
1)
2
login form
login form sir my next form consists logout button when i click on it it showing login form but next form window is not closing but the components...;
LoginDemo()
{
setTitle("Login Form");
setLayout(null);
label1 = new
java - Struts
java Hi,
I want full code for login & new registration page in struts 2
please let me know as soon as possible.
thanks,. Hi friend,
I am sending you a link. This link will help you. Please visit for more
Struts 2.2.1 - Struts 2.2.1 Tutorial
Configuring Actions in Struts application
Login Form Application... Validators
Login form validation example
Struts 2.2.1 Tags
Type... design pattern in Java
technology. Struts 2.2.1 provides Ajax support
Struts - Struts
Struts hi,
I am new in struts concept.so, please explain example login application in struts web based application with source code...://
I hope that, this link will help you
Java + struts - Struts
Java + struts my problem is : import multiple .xls workbooks... execute (ActionMapping mapping, ActionForm form, HttpServletRequest req...();
//ImportFileForm frm=(ImportFileForm) form;
try{
String
struts
struts <p>hi here is my code in struts i want to validate my form fields but it couldn't work can you fix what mistakes i have done</p>...;gt;
<html:form
<pre>
Struts 2 Tutorial
Java
web application. Originally developed by the programmer and author....
Struts 2 Login Application
Developing Login Application in Struts 2
In this section we are going to develop login
validation problem in struts - Struts
validation problem in struts hi friends...
m working on one project using struts framework. so i made a user login form for user authentication...=false;
userloginbean af=(userloginbean)form;
String name
create a form using struts
create a form using struts How can I create a form for inputting text and uploading image using struts
Struts - Struts
Java Bean tags in struts 2 i need the reference of bean tags in struts 2. Thanks! Hello,Here is example of bean tags in struts 2: Struts 2 UI
login to ir00co and postgres - Struts
login to ir00co and postgres thanks for the answer,but i have already a plug in(trilead ssh-2),but still i cannot login to it?can any one help me Running login page i am getting the error.In my jsp page i am giving the message ressource key where can i put the message key.
login jsp... Login
User ID:
Submit
Java Spring Framework Programmer
;
Position Vacant: Java Spring Framework Programmer... Framework, Java, Core Java, Struts, EJB, JSP
Contact Information:
Reference ID: Java Spring Framework Programmer
Struts - Struts
Struts Hello
I like to make a registration form in struts inwhich....
Struts1/Struts2
For more information on struts visit to :
Introduction to Struts 2 Framework
,
Java Beans, ResourceBundles, XML etc.
Struts 2 Framework is very....
Action Form
ActionForm class is mandatory in Struts 1
In Struts...Introduction to Struts 2 Framework - Video tutorial of Struts 2
In this 2 Validation Example
the form validation code in Struts 2 very easily. We will add the form
validation code in our login application.
For validating the login application, java...
Finally we add the link in the index.html to
access the login form.
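As a sketch of what such validation wiring can look like in Struts 2's XML validator framework (the action name and field names below — username, password — are illustrative assumptions, not taken from the listing above), a LoginAction-validation.xml placed next to the action class might be:

```xml
<!-- LoginAction-validation.xml: field names are assumed for illustration. -->
<!DOCTYPE validators PUBLIC
    "-//Apache Struts//XWork Validator 1.0.3//EN"
    "http://struts.apache.org/dtds/xwork-validator-1.0.3.dtd">
<validators>
    <field name="username">
        <field-validator type="requiredstring">
            <message>User name is required.</message>
        </field-validator>
    </field>
    <field name="password">
        <field-validator type="requiredstring">
            <message>Password is required.</message>
        </field-validator>
    </field>
</validators>
```

With this file in place, Struts 2 runs the validators before the action executes and returns the input result with the messages when a field is empty.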
Java Hibernate 3 Programmer
Java Hibernate 3 Programmer
Position Vacant: Java Hibernate 3 Programmer
Job...;
Reference ID: Java Hibernate 3 Programmer.awt.event.*;
class Login
{
JButton SUBMIT;
JLabel label1,label2;
final JTextField text1,text2;
{
final JFrame f=new JFrame("Login Form...login sample java awt/swing code to make loginpage invisible after
saving form bean with Array of objects (collection) - Struts
saving form bean with Array of objects (collection) Hi all... thanks..:) I am facing problem to capture my array of objects(Order) in form bean into action class, the array i get from form is NULL..:( Let me explain
Struts - Struts
Struts is it correct to pass the form object as arg from action to service
Struts-jdbc
Struts-jdbc
<%@ taglib uri="/WEB-INF/struts-html.tld" prefix="html" %>
<%@ taglib uri="/WEB-INF/struts-bean.tld" prefix="bean" %>
<HEAD>
<%@ page
language="java"
contentType="text/html; charset
program code for login page in struts by using eclipse
program code for login page in struts by using eclipse I want program code for login page in struts by using eclipse
Running and Testing Struts 2 Login application
Running and Testing Struts 2 Login application
Running Struts 2 Login Example
In this section we will run... developed and
tested your Struts 2 Login application. In the next section we
How to display data in form using aryylist in struts - Java Beginners
How to display data in form using aryylist in struts Hi,
I want to display data using arraylist in struts pls help me
Its urgent
Hi... - Struts
Hi... Hello,
I want to chat facility in roseindia java expert please tell me the process and when available experts please tell me ... window
And you chat with expert programmer
struts validation
struts validation I want to apply validation on my program.But i am.....
CreateGroup.jsp
<%@ page language="java" pageEncoding="ISO-8859-1"%>
<...;%@ include file="../common/header.jsp"%>
<%@ taglib uri="/WEB-INF/struts
Struts - Struts
,ActionForm form,HttpServletRequest request,HttpServletResponse response)
throws
structs - Struts
struts ssl login How to create struts ssl login
Textarea - Struts
characters.Can any one? Given examples of struts 2 will show how to validate... we have created five different files including three .jsp, one .java and one.xml...;%@ taglib prefix="s" uri="/struts-tags" %><html><head>
Login Form using Ajax
Login Form using Ajax
This section provides you an easy and complete
implementation of login form...;/action>
Develop a Login Form Using Ajax : The GUI of the
application
multiboxes - Struts
in javascript code or in struts bean. Hi friend,
Code to solve interview Question - Struts
struts interview question and answer java struts interview question and answer java. | http://www.roseindia.net/tutorialhelp/comment/14950 | CC-MAIN-2013-20 | refinedweb | 1,834 | 56.15 |
“I get very excited when we discover a way of making neural networks better – and when that’s closely related to how the brain works.”
Geoffrey Hinton
The Connection
Perhaps, the reason why convolutional neural networks have, time and again, proved themselves to be so adept at myriad vision tasks, is because they take their inspiration from one of the most evolved biological systems that exist today – the human visual system.
Not surprisingly, Convolutional Neural Networks or CNNs were not the first class of models conceived to emulate the architecture of our visual system. Various such neurocognition models exist in the literature today. However, it’s one thing to take inspiration from something, another thing to actually get it to work.
CNN was not the first model mimicking the human visual system, but it was the first model that came the closest to human-level performance, and in fact, as of this writing, has also beat human benchmarks in some vision tasks already.
The reason for their ridiculously widespread success, apart from being biologically inspired, can be attributed to the fact that among all its neurocognition predecessor models, CNNs produced activity that directly corresponded to the activity of different areas in the human visual system. This finding has been further strengthened with the introduction of Deep CNNs, where later layers in the network are noted to be in correspondence to later areas in the ventral visual stream.
A bit of medical jargon there, but the point being made, is that CNNs are both, architecturally and functionally similar to our own visual system.
Read More On Convolutional Neural Networks:
The Fundamentals
The traditional way to deal with image data (2D) before CNNs really came to the scene, was by flattening images to one long sequence (1D) and passing it to a Deep Neural Network or DNN. This worked quite well for small binary images (images without color information) but the inherent inefficiency in this approach presented itself when it was exposed to more real-world images with color depth (RGB channels) as shown in Fig. 1 and Fig. 2.
When the dimensions of the image are of a toy case, 28 x 28, the flattened vector (1D) is of length 784. That’s pretty doable!
However, when we take practical problem images, say, like the images captured by a car in an autonomous driving application, the length of the flattened vector almost touches three million!
Not only this, but the flattened (1D) vector does not retain any of the spatial or color information of the original image as shown clearly in Fig. 2!
Clearly, this approach is not scalable!
So the concept is this, how about we just let image data be a 2D sequence instead of all the fancy reshaping?
Drumrolls! Enter Convolutional Neural Networks!
Convolutional Neural Networks accept an image as a 2D sequence (that is, they accept an image as it is), and are thus capable of exploiting an image’s spatial structure and its color information, making it possible for these networks to extract deeper semantic features than an ordinary Deep Neural Network ever would.
The Power of Convolutions
A Convolutional Neural Network works on the principle of ‘convolutions’ borrowed from classic image processing theory.
Let us take a simple, yet powerful example to understand the power of convolutions better.
Imagine if you were tasked with ‘coaching’ a neural network to differentiate between the digits, ‘1’ and ‘2’. Also, you are given the ability to ‘talk’ to a neural network to guide it in this process! What would you advise that the network do?
Well, you would probably tell that the network that the digit ‘1’ has a characteristic feature – a vertical straight line; but the digit ‘2’ doesn’t! Elegant!
However, the problem is, how would your network identify a vertical straight line?
The answer is through convolutions!
Fig. 3 shows how using convolutions with the right filter, a vertical edge in an image can be correctly identified. The convolution operation is an element-wise dot product, followed by summation as illustrated in Fig. 4 and Fig. 5 for both the digits, ‘1’ and ‘2’.
Of course, in this example, you smartly chose a filter for the network which would detect vertical edges in the image. But you aren’t going to be around forever. This means, your network needs to learn the filters that are optimal for solving a given task by itself! And the good news is, they do – through backpropagation.
In a nutshell, Convolutional Neural Networks start off by randomly initializing filters (like the one you chose, but many more) and in the process of training (backpropagation), they keep modifying the initialized filters such that the given task can be accomplished.
For this example task, you’d expect your CNN to ‘learn’ the filter that detects a vertical straight line through backpropagation and become really good at classifying the two digits apart from each other!
This toy example also helps us see how and why CNNs mimic our visual system which as we mentioned, is the reason for their stellar performances time and again!
In a real-world task though, your network would need many more filters than just one to actually get good at the given task, as we’ll shortly see when we code up our own CNN in PyTorch to classify fashion accessories!
We deliberately digressed from the code until now, because we wanted to introduce this topic to you with a little bit of background. AI has become such a fast-paced field that often learners tend to blindly jump straight to bits of ‘library’ code that does all the work for you, without first taking time with the fundamentals.
Remember, in AI, understanding the philosophy, background, and math of a particular architecture or technique, is as important as learning its execution in terms of code for long time success!
The PyTorch Flow.
Learners that have landed here from our previous two blogs, will be familiar to what we call the ‘PyTorch flow’ presented to you in Fig. 6. If you are a new learner, we highly recommend you go through the previous two blogs in this series and then come back here for the best possible learning experience!
Fig. 6 illustrates the steps we’ll follow when building our own CNN for classifying fashion accessories.
More from the Series:
- PyTorch: The Dark Horse of Deep Learning Frameworks (Part 1)
- The Next Step: Building Neural Networks with PyTorch (Part 2)
Oh wait, did we introduce you to the data yet? (Rasul and Xiao).
Fig. 7 below is a small snapshot of the dataset with each class taking three rows.
Further, we have created a Table. 1 to clearly list all the 10 classes along with the labels.
Table 1. Classes and Labels of the Fashion MNIST
PyTorch’s torchvision library makes loading Fashion MNIST (along with other popular datasets) and pre-trained models like AlexNet and VGG seamless.
Let’s go ahead and import this module first, and then look at the code to load Fashion MNIST.
Also Read: Build your own handwritten digit recognition system using MNIST dataset in Python!
First, of course, we need to install the torchvision library on our machine. This can be done with the following command:
(Using pip)
pip install torchvision
(Using conda, if have Anaconda on your machine and prefer it)
conda install -c pytorch torchvision
(Using an IDE or an IPython interface, such as Jupyter Notebooks)
!pip install torchvision
Although we highly recommend that you don’t try to run this on your local machine and instead, opt for a cloud-hosted runtime like Google Colaboratory Notebooks!
There are several reasons for you to make the switch to Colab:
- A cloud-hosted runtime lets you forget about heating up your machines during neural network training and avoid ‘library’ clutter!
- The power of GPU is available to you for accelerating training.
- Great for rapid experimentation and learning new things (like coding up CNNs in PyTorch!)
- It has popular libraries including torchvision preinstalled!
- Naturally comes with the full power of Jupyter Notebooks which lets you add text cells, images, and more along with code!
And the absolute finisher:
- It is completely free of cost!
To facilitate your switch, we have gone ahead and curated a fully functional Colab Notebook for this tutorial which you can follow alongside, or check out later!
We’ll put the link here below:
Note how we don’t have to install any of the libraries in the notebook!
Torchvision installed, we are now ready to import it with a simple import as shown:
import torchvision import torchvision.transforms as transforms import torch from torch.utils.data import DataLoader
You’ll note that along with torchvision, we’ve also imported several other modules. It will make perfect sense to you in a bit. But first, let us see the code for downloading Fashion MNIST and loading it, which is literally two lines!
# Load the Train and Test sets train = torchvision.datasets.FashionMNIST(root='FashionMNIST/', train=True, download=True, transform = transforms.ToTensor()) test = torchvision.datasets.FashionMNIST(root='FashionMNIST/', train=False, download=True, transform = transforms.ToTensor())
Here’s the thing, Fashion MNIST stores the train and test images in the PIL format by default. But we know that if we want to be able to do any computation on those images, we need to convert it to a torch tensor which is exactly why the
transform = transforms.ToTensor() argument comes into the picture!
Once the train and test sets are loaded, it always helps to visualize how the variables train and test actually hold the data as shown in Fig. 8. In our notebook, we have also provided a small code snippet that will strengthen your understanding of how data is held in the train and test variables. Do check it out!
We can also print off the lengths of train and test to confirm that all is as per expectation:
# Confirm that we have the Train and Test sets as expected print('Length of training set: {}'.format(len(train))) print('Length of test set: {}'.format(len(test))) Output: Length of training set: 60000 Length of test set: 10000
What we’ll do next, is that we’ll create batches of 32 within the train and test data along with their labels. This is where the imported
DataLoader class from
torch.utils.data can help us!
The below code snippet uses the DataLoader class that takes our train and test variables and returns a batched version of the images held in it, in sizes of 32.
We store this batched version of our train and test data in two new variables,
train_data_loader and
test_data_loader
# Create batches of size 32 within the train and test datasets train_data_loader = DataLoader(train, batch_size=32, shuffle=True) test_data_loader = DataLoader(test, batch_size=32, shuffle=False)
We show this batching process with the help of a conceptual visualization in Fig. 9.
Now each datapoint in the
train_data_loader and
test_data_loader is a batch of 32 images along with their labels.
A quick sanity check tells us this is indeed the case since 60000/32 = 1875 matches with the length returned by
calling
len()on train_data_loader!
# Sanity check to verify that the data has been batched in sizes of 32 len(train_data_loader) Output: 1875
(Note that for the test set, 10000/32 is 312.5 which means only 16 images can be put in the last batch, as shown in Fig. 9. See if you can verify this in code in our notebook!)
Let’s quickly see some examples of the data we have!
import numpy as np import matplotlib.pyplot as plt # Get some random training images data_iter = iter(train_data_loader) images, labels = data_iter.next() # Function to show some images from the dataset def imshow(img): npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() # Show images imshow(torchvision.utils.make_grid(images)) # Print labels classes = ('T-Shirt/Top', 'Trouser', 'Pullover', 'Dress','Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot') print(' '.join('%5s' % classes[labels[j]] for j in range(4))) Output:
Model Architecture and Forward Pass
The first two steps according to Fig. 6 are defining model architecture with
__init__() and the forward pass with
forward()
import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): # The __init__() function defines the architecture. def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3) self.conv2 = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=3) self.pool1 = nn.MaxPool2d(2, 2) self.conv3 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5) self.conv4 = nn.Conv2d(in_channels=32, out_channels=32, kernel_size=5) self.pool2 = nn.MaxPool2d(2, 2) self.fc1 = nn.Linear(32 * 5 * 5, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 10) # The forward() function defines the forward pass. def forward(self, x): x = F.relu(self.conv1(x)) x = F.relu(self.conv2(x)) x = self.pool1(x) x = F.relu(self.conv3(x)) x = F.relu(self.conv4(x)) x = self.pool2(x) x = x.view(-1, 32 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x OurFirstCNN = Net()
We have already covered all the syntactical elements of
__init__() and
forward() in our previous blog so we’ll just be going over the parts that are different.
1.
nn.Conv2d(): Defines a convolutional layer for us by taking in three key arguments,
in_channels, out_channels, and
kernel size.
in_channels: Here we define the number of channels the input image to this layer is going to have. In our example, for the first convolutional layer,
in_channels = 1, because all our images in the dataset are grayscale.
out_channels: Number of filters (or kernels) that you want to initialize for the given layer.
Note, technically this argument specifies the number of channels produced by the convolution, but that is always equal to the number of filters anyway, so we find our alternate definition a little easier to think of than the former.
kernel size: As straightforward as they come, this argument specifies the size of our filter (or kernel). Usually, a good size is between 3×3 to 7×7.
2.
nn.MaxPool2d(): This implements the simple ‘pooling’ operation, an example of which is shown in Fig. 10. As an argument, it takes the factor by which to reduce dimensionality along the axes of a convolved image. Essentially, it performs downsampling by preserving only the maximum activations.
A small point that needs a little discussion is the line:
x = x.view(-1, 32 * 5 * 5)
Recall that if we want to pass a 2D sequence to a fully connected layer (specified by
nn.Linear), we must flatten it to a 1D sequence. After passing through several convolutional layers, the image is essentially converted to what is called a ‘feature map’. This feature map is of considerably higher dimensionality than the original image. For our example, the image has 1 channel at the beginning but ends up being compressed into a ‘feature map’ having 32 channels!
At this point, enough features have been extracted by the filters that acted on the image through convolution, in all the layers that the image went through. And now that the features have been extracted, it would be fair to flatten it to a 1D sequence and leave the classification to a Deep Neural Network or DNN which is exactly what
x.view(-1, 32 * 5 * 5) does – a simple flattening operation.
Our Colab Notebook has some more code snippets that clearly demonstrate the behavior of the x.view()function. Be sure to check that out!
The architecture of our designed CNN is presented in Fig. 11.
We have also marked all the dimensions along with the filter and feature map sizes.
See if you can relate each block of the image to what we’ve coded in
__init__()!
The Training Loop
From our developed ‘PyTorch Flow’ in Fig. 6, the next steps are as follows:
- Performing the forward pass and calculate the loss.
- Invoke Autograd to backpropagate.
- Use the optimizer to update the gradients.
If we repeat these three steps for all the images in a loop, what we’ll have, is the training loop! Pretty easy! Let’s jump to the code!
Or… maybe not.
To perform any of those steps, we would first need to define a loss function and an optimizer as follows:
import torch.optim as optim # Define the loss function and optimizer. # We define the crossentropy loss here because we have 10 classes # Adam as optimizer with its default learning rate should work well loss_fn = nn.CrossEntropyLoss() optimizer = optim.Adam(OurFirstCNN.parameters(), lr=0.001) # Define a function for calculating accuracy def accuracy(): correct = 0 total = 0 with torch.no_grad(): for data in testloader: images, labels = data outputs = OurFirstCNN(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
We’ve also defined a function for calculating accuracy which will help us evaluate the performance of our model during training and after.
Okay, yes now we good.
Now let’s show you the code!
# The Training Loop for epoch in range(10): # loop over the dataset multiple times for i, data in enumerate(train_data_loader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = loss_fn(outputs, labels) loss.backward() optimizer.step() # print statistics (loss and accuracy) for each mini-batch print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, loss.item())) print('Finished Training')
Here’s what’s happening in the training loop:
1. For each iteration over the training dataset:
- For each minibatch (32 images with their labels as stored in
train_data_loader):
- Load the minibatch. Images in
inputsand the
labelsin labels.
- Calculate the loss by passing the minibatch to the network
- Backpropagate the error by
loss.backward()
- Update the gradients by
optimizer.step()
- Print the epoch number, current minibatch number, loss along with the test accuracy by calling the
accuracy()function defined before!
In TensorFlow (Keras), you probably wouldn’t have to define a separate function for accuracy like we did with PyTorch. But part of the reason that makes PyTorch so popular, is how it is super flexible without being complicated!
What do we mean by that?
Well, say you want to check, at each epoch, how well the network is performing with respect to all the different classes and not just how it is doing overall!
You can do this by simply switching out the accuracy function we had defined with this definition and call it in the training loop:
def accuracy(): class_correct = list(0. for i in range(10)) class_total = list(0. for i in range(10)) with torch.no_grad(): for data in testload Let’s also show you er: images, labels = data outputs = net(images) _, predicted = torch.max(outputs, 1) c = (predicted == labels).squeeze() for i in range(4): label = labels[i] class_correct[label] += c[i].item() class_total[label] += 1 for i in range(10): print('Accuracy of %5s : %2d %%' % ( classes[i], 100 * class_correct[i] / class_total[i]))
Doing this is not at all trivial in TensorFlow since we cannot use native Python-like we did with PyTorch here. We’ll see more of this in the next blog of this series where we give you learners some final thoughts comparing both these deep learning libraries!
In fact, in our Colab Notebook, we have implemented the same network with detailed comments in TensorFlow so you can compare each step side by side!
Hey! But How Did Our Network Perform?
Well, pretty awesome actually.
Take a look at the loss and accuracy graphs below in Fig. 12 and Fig. 13!
In just 10 epochs, we have achieved over 91% test accuracy! That’s super cool!
Let’s also show you how the console looks like when training, and also when we seamlessly switch out normal accuracy with a per-class accuracy! Check out Fig. 14 and 15!
Note from Fig. 15 that initially the per-class accuracy is 0% for almost all classes and it improves as the training progresses! Also, make a mental note of how insightful monitoring per-class accuracies can be!
Accelerating Training with GPU! 🚀
You’ll notice right away that the network takes quite a while to train. You can accelerate it by leveraging Colab’s free GPU (also the reason why we recommended Colab in the first place)!
To be able to use a GPU with PyTorch though, you need to ‘send’ your tensors ‘to the GPU’.
We’ve covered the syntax for how to do this in our first introductory blog on PyTorch. We’ve got you covered here, but if you haven’t read that blog, we suggest you head over there right away!
So all you really need to do is this:
1. Add these lines after the model definition, that is, after
__init__() and
forward():
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # Assuming that we are on a CUDA machine, this should print a CUDA device: print(device) OurFirstCNN.to(device)
This selects the GPU so that PyTorch knows where to send your tensors.
2. Switch out the existing line in the training loop with:
# In the training loop for epoch in range(10): # loop over the dataset multiple times for i, data in enumerate(train_data_loader, 0): # Get the inputs inputs, labels = data[0].to(device), data[1].to(device)
This ‘sends’ your tensors ‘to the GPU’ we selected in the previous step. Note the
.to(device) syntax!
3. Lastly, perform the same switch in our accuracy functions:
# In the accuracy functions with torch.no_grad(): for data in test_data_loader: images, labels = data[0].to(device), data[1].to(device)
Done! Your network will now train like a 🚀
Note that GPU training is the default behavior in our Colab Notebook! But we have provided options for you to do a CPU train too, just so that you can see the difference for yourself!
Your Learning Outcomes
- First, you learned the fundamentals of where CNN gets its superpowers from!
- Next, you explored the Fashion MNIST dataset and PyTorch’s DataLoader feature!
- Subsequently, we followed our ‘PyTorch Flow’ developed in earlier blogs to build a CNN step by step. We also touched on how flexible PyTorch can be when we seamlessly switched our accuracy functions to reveal more about our training!
- You learned how to accelerate network training by using Colab’s free GPU with PyTorch!
- In our Colab Notebook, you saw a step by step comparison of going about the same problem in PyTorch and TensorFlow!
Learners, apart from being one of the top all-time skills, coding is a very intricate field. And since not all the programming subtleties can be covered in a blog without the risk of it running too long, we’ve added lots of other supporting content in our Colab Notebook that will truly strengthen your understanding of not only PyTorch and how it compares to TensorFlow, but also Python coding in general!
More from the Series:
- PyTorch: The Dark Horse of Deep Learning Frameworks (Part 1)
- The Next Step: Building Neural Networks with PyTorch (Part 2)
Read More: | https://blog.eduonix.com/artificial-intelligence/building-convolutional-neural-networks-pytorch/ | CC-MAIN-2020-45 | refinedweb | 3,904 | 55.44 |
MIDP Programming with J2ME
Low-Level API
In contrast to the high-level API, the low-level API allows full control of the MID display at pixel level. For this purpose, the lcdui package contains a special kind of screen called Canvas. The Canvas itself does not provide any drawing methods, but it does provide a paint() callback method similar to the paint() method in AWT components. Whenever the program manager determines that it is necessary to draw the content of the screen, the paint() callback method of Canvas is called. The only parameter of the paint() method is a Graphics object. In contrast to the lcdui high-level classes, there are many parallels to AWT in the low-level API.
The Graphics object provides all the methods required for actually drawing the content of the screen, such as drawLine() for drawing lines, fillRect() for drawing a filled rectangular area, or drawString() for drawing text strings.
The program manager knows that it must call the paint() method of Canvas when the instance of Canvas is shown on the screen. However, a repaint can also be triggered by the application at any time. By calling the repaint() method of Canvas, the system is notified that a repaint is necessary, and it will call the paint() method. The call of the paint() method is not performed immediately; it may be delayed until the control flow returns from the current event handling method. The system may also collect several repaint requests before paint() is actually called. This delay normally is not a problem, but when you're doing animation, the safest way to trigger repaints is to use Display.callSerially() or to request the repaint from a separate Thread or TimerTask. Alternatively, the application can force an immediate repaint by calling serviceRepaints(). (For more information, see the section "Animation" at the end of this chapter.)
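The collect-then-paint behavior can be pictured as the system merging all outstanding dirty rectangles into one before paint() runs. The following plain-Java sketch (the class and method names are illustrative, not part of the MIDP API) shows that bookkeeping:

```java
// Illustrative sketch of repaint coalescing: repeated repaint(x, y, w, h)
// requests are merged into one bounding rectangle, which is all that the
// single later paint() call then has to cover.
public class RepaintQueue {
    private int x, y, w, h;      // current dirty rectangle
    private boolean pending;     // is a repaint outstanding?

    public void repaint(int rx, int ry, int rw, int rh) {
        if (!pending) {
            x = rx; y = ry; w = rw; h = rh;
            pending = true;
        } else {
            // grow the dirty rectangle to the union of both requests
            int x2 = Math.max(x + w, rx + rw);
            int y2 = Math.max(y + h, ry + rh);
            x = Math.min(x, rx);
            y = Math.min(y, ry);
            w = x2 - x;
            h = y2 - y;
        }
    }

    // Called by the event loop; returns the merged region as {x, y, w, h}.
    public int[] flush() {
        pending = false;
        return new int[] { x, y, w, h };
    }

    public static void main(String[] args) {
        RepaintQueue q = new RepaintQueue();
        q.repaint(10, 10, 20, 20);   // first request
        q.repaint(40, 5, 10, 10);    // second request before paint() runs
        int[] clip = q.flush();      // one paint() covering both: 10,5,40,25
        System.out.println(clip[0] + "," + clip[1] + "," + clip[2] + "," + clip[3]);
    }
}
```

This also explains why two quick repaint() calls may result in only one call to paint().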
The Canvas class also provides some input callback methods that are called when the user presses or releases a key or touches the screen with the stylus (if one is supported by the device).
Basic Drawing
Before we go into the details of user input or animation, we will start with a small drawing example showing the concrete usage of the Canvas and Graphics classes.
The example clears the screen by setting the color to white and filling a rectangle the size of the screen, determined by calling getWidth() and getHeight(). Then it draws a line from coordinates (0,0) to (100,200). Finally, it draws a filled rectangle starting at (20,30), 30 pixels wide and 20 pixels high:
import javax.microedition.lcdui.*;

class DrawingDemoCanvas extends Canvas {

    public void paint (Graphics g) {
        g.setGrayScale (255);
        g.fillRect (0, 0, getWidth (), getHeight ());

        g.setGrayScale (0);
        g.drawLine (0, 0, 100, 200);
        g.fillRect (20, 30, 30, 20);
    }
}
As you can see in the example code, you create a custom class DrawingDemoCanvas in order to fill the paint() method. Actually, it is not possible to draw custom graphics without creating a new class and implementing the paint() method.
In order to really see your Canvas implementation running, you still need a corresponding MIDlet. Here's the missing code:
import javax.microedition.midlet.*;
import javax.microedition.lcdui.*;

public class DrawingDemo extends MIDlet {

    public void startApp () {
        Display.getDisplay (this).setCurrent (new DrawingDemoCanvas ());
    }

    public void pauseApp () {}

    public void destroyApp (boolean forced) {}
}
Now you can start your DrawingDemo MIDlet. Depending on the screen size of the device, it will create output similar to Figure 3.9. In most subsequent examples, you will omit the MIDlet since it is basically the same as this one, except that the name of your Canvas class will be different.
Figure 3.9 Output of the DrawingDemo MIDlet.
In the example, the screen is cleared before drawing because the system relies on the paint() method to fill every pixel of the draw region with a valid value. The previous content of the screen is not erased automatically, because doing so could cause animations to flicker. The application cannot make any assumptions about the content of the screen before paint() is called. The screen may be filled with the content drawn at the last call of paint(), but it may also be filled with an alert box remaining from an incoming phone call, for example.
Drawing Style and Color
In the DrawingDemoCanvas implementation, you can find two calls to setGrayScale(). The setGrayScale() method sets the gray scale value for the following drawing operations. Valid grayscale values range from 0 to 255, where 0 means black and 255 means white. Not all possible values may actually render to different gray values on the screen. If the device provides fewer than 256 shades of gray, the best fitting value supported by the device is chosen. In the example, the value is first set to white, and the screen is cleared by the following call to drawRect(). Then, the color is set to black for the subsequent drawing operations.
The setGrayScale() method is not the only way to influence the color of subsequent drawing. MIDP also provides a setColor() method. The setColor() method has three parameters holding the red, green, and blue components of the desired color. Again, the values range from 0 to 255, where 255 means brightest and 0 means darkest. If all three parameters are set to the same value, the call is equivalent to a corresponding call of setGrayScale(). If the device is not able to display the desired color, it chooses the best fitting color or grayscale supported by the device automatically. Some examples are listed in Table 3.7.
Table 3.7 Example Color Parameter Settings
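The best-fit behavior of setGrayScale() and setColor() on limited displays amounts to quantization: the requested intensity is snapped to the nearest level the hardware supports. A plain-Java sketch of the idea (illustrative names, not part of the MIDP API; the averaging used to reduce RGB to gray is a simplifying assumption):

```java
// Illustrative sketch: snapping a requested 0-255 gray value to the
// nearest level on a display that supports only `levels` shades of gray,
// as a device with fewer than 256 shades would have to do.
public class GrayQuantizer {

    // Map a 0-255 gray value to the nearest representable value on a
    // display with `levels` evenly spaced shades (levels >= 2).
    public static int nearestGray(int gray, int levels) {
        int step = (int) Math.round(gray * (levels - 1) / 255.0);
        return (int) Math.round(step * 255.0 / (levels - 1));
    }

    // Reduce an RGB color to gray first (simple average), then snap.
    public static int nearestGrayFromRgb(int r, int g, int b, int levels) {
        return nearestGray((r + g + b) / 3, levels);
    }

    public static void main(String[] args) {
        // On a 4-shade display the representable grays are 0, 85, 170, 255.
        System.out.println(nearestGray(100, 4));              // snaps to 85
        System.out.println(nearestGrayFromRgb(255, 0, 0, 4)); // red -> 85
    }
}
```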
The only other method that influences the current style of drawing is the setStrokeStyle() method. The setStrokeStyle() command sets the drawing style of lines to dotted or solid. You determine the style by setting the parameter to one of the constants DOTTED or SOLID, defined in the Graphics class.
When the paint() method is entered, the initial drawing color is always set to black and the line style is SOLID.
Simple Drawing Methods
In the example, you have already seen fillRect() and drawLine(). Table 3.8 shows all drawing primitives contained in the Graphics class. All operations whose method names begin with draw, except drawString() and drawImage(), are influenced by the current color and line style. They draw the outline of a figure, whereas the fill methods fill the corresponding area with the current color and do not depend on the line style.
Table 3.8 Drawing Methods of the Graphics Class
Coordinate System and Clipping
In the drawing example, we already have used screen coordinates without explaining what they actually mean. You might know that the device display consists of little picture elements (pixels). Each of these pixels is addressed by its position on the screen, measured from the upper-left corner of the device, which is the origin of the coordinate system. Figure 3.10 shows the lcdui coordinate system.
Actually, in Java the coordinates do not address the pixel itself, but the space between two pixels, where the "drawing pen" hangs to the lower right. For drawing lines, this does not make any difference, but for rectangles and filled rectangles this results in a difference of one pixel in width and height: In contrast to filled rectangles, rectangles become one pixel wider and higher than you might expect. While this may be confusing at first glance, it respects the mathematical notation that lines are infinitely thin and avoids problems when extending the coordinate system to real distance measures, as in the J2SE class Graphics2D.
Figure 3.10 The lcdui coordinate system.
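The one-pixel difference can be verified with a tiny rasterizer: the outline produced by drawRect(x, y, w, h) spans w + 1 columns and h + 1 rows of pixels, while fillRect(x, y, w, h) fills exactly w by h. The following plain-Java sketch simulates both on a boolean grid (illustrative, not the MIDP implementation):

```java
// Tiny rasterizer demonstrating the "pen between pixels" rule: the outline
// of a rectangle of size (w, h) touches (w + 1) x (h + 1) pixels, while
// filling the same rectangle touches only w x h pixels.
public class PixelModel {
    static boolean[][] grid = new boolean[64][64];

    static void set(int x, int y) { grid[y][x] = true; }

    // Outline of drawRect(x, y, w, h): four edges, corners inclusive.
    static void drawRect(int x, int y, int w, int h) {
        for (int i = 0; i <= w; i++) { set(x + i, y); set(x + i, y + h); }
        for (int j = 0; j <= h; j++) { set(x, y + j); set(x + w, y + j); }
    }

    // fillRect(x, y, w, h) fills exactly w columns and h rows.
    static void fillRect(int x, int y, int w, int h) {
        for (int j = 0; j < h; j++)
            for (int i = 0; i < w; i++)
                set(x + i, y + j);
    }

    static int count() {
        int n = 0;
        for (boolean[] row : grid) for (boolean p : row) if (p) n++;
        return n;
    }

    static void clear() { grid = new boolean[64][64]; }

    public static void main(String[] args) {
        drawRect(20, 30, 30, 20);        // bounding box is 31 x 21 pixels
        System.out.println("outline pixels: " + count());
        clear();
        fillRect(20, 30, 30, 20);        // exactly 30 * 20 = 600 pixels
        System.out.println("filled pixels: " + count());
    }
}
```

Running it confirms that the drawRect() outline extends one pixel beyond the area that fillRect() fills with the same arguments.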
In all drawing methods, the first coordinate (x) denotes the horizontal distance from the origin and the second coordinate (y) denotes the vertical distance. Positive coordinates mean a movement down and to the right. Many drawing methods require additional width and height parameters. An exception is the drawLine() method, which requires the absolute coordinates of the destination point.
The origin of the coordinate system can be changed using the translate() method. The given coordinates are added to all subsequent drawing operations automatically. This may make sense if addressing coordinates relative to the middle of the display is more convenient for some applications, as shown in the section "Scaling and Fitting," later in the chapter.
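Moving the origin with translate() is pure bookkeeping: offsets accumulate across calls, and every coordinate passed to a later drawing operation has the running offset added. A plain-Java sketch (illustrative names, not the MIDP API):

```java
// Illustrative sketch of Graphics.translate() bookkeeping: translations
// accumulate, and every subsequent coordinate is shifted by the total.
public class Origin {
    private int tx, ty;   // accumulated translation

    // Like Graphics.translate(dx, dy): offsets add up across calls.
    public void translate(int dx, int dy) { tx += dx; ty += dy; }

    // Device coordinate that a drawing call with (x, y) actually hits.
    public int[] toDevice(int x, int y) { return new int[] { x + tx, y + ty }; }

    public static void main(String[] args) {
        Origin g = new Origin();
        g.translate(96 / 2, 54 / 2);           // move origin to screen center
        int[] p = g.toDevice(-10, 0);          // 10 pixels left of center
        System.out.println(p[0] + "," + p[1]); // 38,27
    }
}
```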
The actual size of the accessible display area can be queried using the getWidth() and getHeight() methods, as performed in the first example that cleared the screen before drawing. The region of the screen where drawing takes effect can be further limited to a rectangular area by the clipRect() method. Drawing outside the clip area will have no effect.
The following example demonstrates the effects of the clipRect() method. First, a dotted line is drawn diagonally over the display. Then a clipping region is set. Finally, the same line as before is drawn using the SOLID style:
import javax.microedition.lcdui.*; class ClipDemoCanvas extends Canvas { public void paint (Graphics g) { g.setGrayScale (255); g.fillRect (0, 0, getWidth (), getHeight ()); int m = Math.min (getWidth (), getHeight ()); g.setGrayScale (0); g.setStrokeStyle (Graphics.DOTTED); g.drawLine (0, 0, m, m); g.setClip (m / 4, m / 4, m / 2, m / 2); g.setStrokeStyle (Graphics.SOLID); g.drawLine (0, 0, m, m); } }
Figure 3.11 shows the resulting image. Although both lines have identical start and end points, only the part covered by the clipping area is replaced by a solid line.
Figure 3.11 Output of the clipRect() example: Only the part covered by the clipping area is redrawn solid, although the line coordinates are identical.
When the paint() method is called from the system, a clip area may already be set. This may be the case if the application just requested repainting of a limited area using the parameterized repaint call, or if the device just invalidated a limited area of the display, for example if a pop-up dialog indicating an incoming call was displayed but did not cover the whole display area.
Actually, clipRect() does not set a new clipping area, but instead shrinks the current clip area to the intersection with the given rectangle. In order to enlarge the clip area, use the setClip() method.
The current clip area can be queried using the getClipX(), getClipY(), getClipWidth(), and getClipHeight() methods. When drawing is computationally expensive, this information can be taken into account in order to redraw only the areas of the screen that need an update.
Text and Fonts
For drawing text, lcdui provides the method drawstring(). In addition to the basic drawstring() method, several variants let you draw partial strings or single characters. (Details about the additional methods can be found in the lcdui API documentation.) The simple drawstring() method takes four parameters: The character string to be displayed, the x and y coordinates, and an integer determining the horizontal and vertical alignment of the text. The alignment parameter lets you position the text relative to any of the four corners of its invisible surrounding box. Additionally, the text can be aligned to the text baseline and the horizontal center. The sum or logical or (|) of a constant for horizontal alignment (LEFT, RIGHT, and HCENTER) and constants for vertical alignment (TOP, BOTTOM, and BASELINE) determine the actual alignment. Figure 3.12 shows the anchor points for the valid constant combinations.
Figure 3.12 Valid combinations of the alignment constants and the corresponding anchor points.
The following example illustrates the usage of the drawstring() method. By choosing the anchor point correspondingly, the text is displayed relative to the upper-left and lower-right corner of the screen without overlapping the screen border:
import javax.microedition.lcdui.*; class TextDemoCanvas extends Canvas { public void paint (Graphics g) { g.setGrayScale (255); g.fillRect (0, 0, getWidth (), getHeight ()); g.setGrayScale (0); g.drawString ("Top/Left", 0, 0, Graphics.TOP | Graphics.LEFT); g.drawString ("Baseline/Center", getWidth () / 2, getHeight () / 2, Graphics.HCENTER | Graphics.BASELINE); g.drawString ("Bottom/Right", getWidth (), getHeight (), Graphics.BOTTOM | Graphics.RIGHT); } }
Figure 3.13 shows the output of the TextDemo example.
Figure 3.13 Output of the TextDemo example.
In addition to the current drawing color, the result of the drawstring() method is influenced by the current font. MIDP provides support for three different fonts in three different sizes and with the three different attributes: bold, italic, and underlined.
A font is not selected directly, but the setFont() method takes a separate Font object, describing the desired font, as a parameter. The explicit Font class provides additional information about the font, such as its width and height in pixels, baseline position, ascent and descent, and so on. Figure 3.14 illustrates the meaning of the corresponding values. This information is important for operations such as drawing boxes around text strings. In addition, word-wrapping algorithms rely on the actual pixel width of character strings when rendered to the screen.
Figure 3.14 Font properties and the corresponding query methods.
A Font object is created by calling the static method createFont() of the class Font in the lcdui package. The createFont() method takes three parameters: the font type, style, and size of the font. Similar to the text alignment, there are predefined constants for setting the corresponding value; these constants are listed in Table 3.9.
Table 3.9 createFont() Property Constants
The style constants can be combinedfor example, STYLE_ITALICS | STYLE_BOLD will result in a bold italics font style.
The following example shows a list of all fonts available, as far as the list fits on the screen of the device:
import javax.microedition.lcdui.*; class FontDemoCanvas extends Canvas { static final int [] styles = {Font.STYLE_PLAIN, Font.STYLE_BOLD, Font.STYLE_ITALIC}; static final int [] sizes = {Font.SIZE_SMALL, Font.SIZE_MEDIUM, Font.SIZE_LARGE}; static final int [] faces = {Font.FACE_SYSTEM, Font.FACE_MONOSPACE, Font.FACE_PROPORTIONAL}; public void paint (Graphics g) { Font font = null; int y = 0; g.setGrayScale (255); g.fillRect (0, 0, getWidth (), getHeight ()); g.setGrayScale (0); for (int size = 0; size < sizes.length; size++) { for (int face = 0; face < faces.length; face++) { int x = 0; for (int style = 0; style < styles.length; style++) { font = Font.getFont (faces [face], styles [style], sizes [size]); g.setFont (font); g.drawString ("Test", x+1, y+1, Graphics.TOP | Graphics.LEFT); g.drawRect (x, y, font.stringWidth ("Test")+1, font.getHeight () + 1); x += font.stringWidth ("Test")+1; } y += font.getHeight () + 1; } } } }
Figure 3.15 shows the output of the FontDemo example.
Figure 3.15 Output of the FontDemo example.
Page 5 of<< | https://www.developer.com/java/j2me/article.php/10934_1561591_5/MIDP-Programming-with-J2ME.htm | CC-MAIN-2019-04 | refinedweb | 2,410 | 56.45 |
Carousel wont appear
I am new to OnsenUI. I have used the example react-onsenui-redux-weather as a starting point. OnsenUI has been behaiving until I tried to use a Carousel. I basically overwrote the “content” variable with the code below in WeatherPage.js. I had to comment out a few lines because no “this” in a functional component. I get a blank screen and no error. Any ideas?
content = <Carousel
//index={this.state.index}
onPostChange={() => console.log(‘onPostChange’)}
onOverscroll={() => console.log(‘onOverscroll’)}
onRefresh={() => console.log(‘onRefresh’)}
//ref={(carousel) => { this.carousel = carousel; }}
swipeable
overscrollable
autoScroll
fullscreen
autoScrollRatio={0.2
}
<CarouselItem key={"1"} style={{ backgroundColor: 'gray' }}> <div className='item-label'>GRAY</div> <p>YO</p> </CarouselItem> <CarouselItem key={"2"} style={{ backgroundColor: '#085078' }}> <div className='item-label'>BLUE</div> </CarouselItem>
</Carousel>
I should add that
as I add Carouseltems the scrollbar on the right gets “deeper”.
I see in the debugger that element ons-carousel-item has CSS visibility: hidden; If I remove this then my content appears however the carousel-items are stacked on top of each other (see 1))
I should add that the 2 points above were with no fullscreen attribute. if I add fullscreen then the scrollbar disappears but still no carousel
looks like I have solved it. looks like my node_modules onsui package was screwed up. The directory still had 2.0.4 stylus based css in it. All off a sudden I was getting on build
failed to locate @import file onsenui/stylus/components.styl
So instead trying to get 2.0.4 working again, I tried to get 2.10.5 working. The fix was to replace (in index.js)
import ‘./stylus/index.styl’;
with
import 'onsenui/css/onsenui.css’
import ‘onsenui/css/onsen-css-components.css’
now I am seeing my Carousel | https://community.onsen.io/topic/3362/carousel-wont-appear | CC-MAIN-2018-43 | refinedweb | 299 | 61.43 |
Bind React class component methods to proper `this` without calling bind() in constructor
One of the common pitfalls when learning React is related to event handlers losing the reference to
this pointing to the component when called. Common approach to solving this issue is to use a
constructor and
.bind() function to bind all class methods to
this object, referring to the component itself.
It sometimes leads to a
constructor which sole purpose is to bind a bunch of methods.
In this lesson we are going to learn how to use a public class field syntax (currently available in Babel stage-2 and enabled by default in create-react-app) to avoid having to create unnecessary constructors in order to bind a React component method to proper
this | https://egghead.io/lessons/react-bind-react-class-component-methods-to-proper-this-without-calling-bind-in-constructor?utm_source=rss&utm_medium=feed&utm_campaign=rss_feed | CC-MAIN-2019-09 | refinedweb | 128 | 54.56 |
0
I wrote an Ansi C# code that finds the smallest number in a several unknown function calls.
You can see the code, it is very clear. just need help with how to make the program to not ignore always the first number in the list of each argument list... and another, how to handle with call like:
lowest_ever ( -1);
The range of numbers is between 0 to +100 ONLY ! and every call to function must end with -1 (those are the roles....).
My tryout:
#include <stdarg.h> #include <stdio.h> int lowest_ever (int frst,...) { va_list mylist; static int lowest_num=101; static int next_num; va_start (mylist, frst); /*Initialize the argument list*/ next_num= va_arg(mylist, int); while (next_num!=-1) { if (next_num <lowest_num) lowest_num= next_num; next_num = va_arg(mylist, int); } va_end (mylist); /*Clean up */ return lowest_num; } int main (void) { /*This call prints 5*/ printf ("%d\n", lowest_ever (5, 78, 90, 20, -1)); /*This call prints 2*/ printf ("%d\n", lowest_ever (70, 40, 2, -1)); /*This call prints 2*/ printf ("%d\n", lowest_ever (40, 30, -1)); return 0; }
Thanks !! | https://www.daniweb.com/programming/software-development/threads/254453/find-lowest-number-in-several-unknown-arguments | CC-MAIN-2017-39 | refinedweb | 175 | 71.55 |
Recently I was working with some of my team on an Ember component which needed to react to JavaScript events they expressed some confusion about the difference between JavaScript events and Ember's Action system. I decided to write up the basics here.
Blowing bubbles
One of the fundamental behaviours of JavaScript DOM events is bubbling. Let's focus on a
click event, although the type of event is arbitrary. Suppose we have an HTML page composed like this:
<html> <body> <main> <p>Is TimeCop a better time travel movie than Back To The Future?</p> <button>Yes</button> <button>No</button> <button>Tough Call</button> </main> </body> </html>
Supposing I load this page in my browser and I click on the "Tough Call" button (one of three correct answers on this page) then the browser walks down the DOM to find the element under the mouse pointer. It looks at the root element, checks if the coordinates of the click event are within that element's area, if so it iterates the element's children repeating the test until it finds an element that contains the event coordinates and has no children. In our case it's the last
button element on the screen.
Once the browser has identified the element being clicked it then checks to see if it has any click event listeners. These can be added by using the
onclick HTML attribute (discouraged), setting the
onclick property of the element object (also discouraged) or by using the element's
addEventListener method. If there are event handlers present on the element they are called, one by one, until one of the handlers tells the event to stop propagating, the event is cancelled or we run out of event handlers. The browser then moves on to the element's parent and repeats the process until either the event is cancelled or we run out of parent elements.
Getting a handle on it
Event handlers are simple javascript functions which accept a single Event argument (except for
onerror which gets additional arguments). MDN's Event Handlers Documentation is very thorough, you should read it.
There are some tricky factors involving the return value of the function; the rule of thumb is that if you want to cancel the event return
true otherwise return nothing at all. The
beforeunload and
error handlers are the exception to this rule.
A little less conversation
Ember actions are similar in concept to events, and are triggered by events (
click by default) but they propagate in a different way. The first rule of Ember is "data down, actions up". What this means is that data comes "down" from the routes (via their
model hooks) through the controller and into the view. The view emits actions which bubble back "up" through the controller to the routes.
Let's look at a simple example. First the router:
import Router from '@ember/routing/router'; Router.map(function() { this.route('quiz', { path: '/quiz/:slug'}) }); export default Router;
Now our quiz route:
import Route from '@ember/routing/route'; export default Route.extend({ model({ slug }) { return fetch(`/api/quizzes/${slug}`) .then(response => response.json()); } });
Now our quiz template:
<p>{{model.question}}</p> {{#each model.answers as |answer|}} <button {{action 'selectAnswer' answer}}>{{answer}}</button> {{/each}}
A quick aside about routing
When we load our quiz page Ember first enters the
application route and calls it's
model hook. Since we haven't defined an application route in our app Ember generates a default one for us which returns nothing from it's model hook. Presuming we entered the
/quiz/time-travel-movies URI the router will then enter the
quiz route and call the model hook which we presume returns a JSON representation of our quiz. This means that both the
application and the
quiz route are "active" at the same time. This is a pretty powerful feature of Ember, especially once routes start being deeply nested.
More bubble blowing
When an action is fired Ember bubbles it up the chain; first to the quiz controller, then to the
quiz route and then to the parent route and so on until it either finds an action handler or it reaches the application route. This bubbling behaviour is pretty cool because it means we can handle common actions near the top of the route tree (log in or out actions for example) and more specific ones in the places they're needed.
Notably Ember will throw an error if you don't have a handler for an action, so in our example above it will explode because we don't handle our
selectAnswer in the controller or the route.
The lonesome component
Ember's "data down, actions up" motto breaks down at the component level. Ember components are supposed to be atomic units of UI state which don't leak side effects. This means that our options for emitting actions out of components are deliberately limited. Actions do behave exactly as you'd expect within a component, except that there's no bubbling behaviour. This means that actions that are specified within a component's template which do not have a corresponding definition in the component's javascript will cause Ember to throw an error.
The main way to allow components to emit actions is to use what ember calls "closure actions" to pass in your action as a callable function on a known property of your component, for example:
{{my-button onSelect=(action 'selectAnswer' answer) label=answer}}
import Component from '@ember/component'; import { resolve } from 'rsvp'; export default Component({ tagName: 'button', onSelect: resolve, actions: { selectAnswer(answer) { return this.onSelect(answer); } } });
This is particularly good because you can reuse the component in other places without having to modify it for new use cases. This idea is an adaptation of the dependency injection pattern.
The eventual component
There are three main ways components can respond to browser events. The simplest is to use the
action handlebars helper to respond to your specific event, for example:
<div {{action 'mouseDidEnter' on='mouseEnter'}} {{action 'mouseDidLeave' on='mouseLeave'}}> {{if mouseIsIn 'mouse in' 'mouse out'}} </div>
As you can see, this can be a bit unwieldy when responding to lots of different events. It also doesn't work great if you want your whole component to react to events, not just elements within it.
The second way to have your component respond to events is to define callbacks in your component. This is done by defining a method on the component with the name of the event you wish to handle. Bummer if you wanted to have a property named
click or
submit. There's two things you need to know about Component event handlers; their names are camelised (full list here) and the return types are normalised. Return
false if you want to cancel the event. Returning anything else has no effect.
import Component from '@ember/component'; export default Component({ mouseIsIn: false, mouseDidEnter(event) { this.set('mouseIsIn', true); return false; }, mouseDidLeave(event) { this.set('mouseIsIn', false); return false; } });
The third way is to use the
didInsertElement and
willDestroyElement component lifecycle callbacks to manually manage your events when the component is inserted and removed from the DOM.
export default Component({ mouseIsIn: false, didInsertElement() { this.onMouseEnter = () => { this.set('mouseIsIn', true); }; this.onMouseLeave = () => { this.set('mouseIsIn', false); }; this.element.addEventListener('mouseenter', this.onMouseEnter); this.element.addEventListener('mouseleave', this.onMouseLeave); }, willRemoveElement() { this.element.removeEventListener('mouseenter', this.onMouseEnter); this.element.removeEventListener('mouseleave', this.onMouseLeave); } });
Note that using either of the last two methods you can use
this.send(actionName, ...arguments) to trigger events on your component if you think that's cleaner.
Conclusion
As you can see, actions and events are similar but different. At the most basic level events are used to make changes to UI state and actions are used to make changes to application state. As usual that's not a hard and fast rule, so when asking yourself whether you should use events or actions, as with all other engineering questions, the correct answer is "it depends".
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/jimsy/events-vs-actions-in-emberjs-84o | CC-MAIN-2021-17 | refinedweb | 1,341 | 51.48 |
JAXP (Java API for XML Processing) is a Java interface to allow consistent parsing of XML documents, which is just what you want to do. It's easy to use and there's lots of info available on the web. Actually, there are loads of different libraries, so this is just one solution, but pretty standard. The code below will load & parse your file, and processing the document is pretty easy after that. Search Google for "JAXP parser" or similar and you'll get a lot of pretty relevant hits.
One point: Apache Xerces and XML4J are other libraries, but JAXP is the best of the lot, IMHO.
import org.w3c.dom.*;
import javax.xml.parsers.Document
import javax.xml.parsers.Document
public class LoadXMLDocument
{
public static void main(String[] args)
{
String myInputSource= "pythonStuff.xml";
try{
// First, you need a document factory
DocumentBuilderFactory factory = DocumentBuilderFactory.new
// Next, you need a parser
DocumentBuilder parser = factory.newDocumentBuilder
// Load file into document DOM
Document doc = parser.parse(myInputSource
} catch(Exception ex){
System.out.println(ex.getM
}
// OK, now we have our DOM
}
} | https://www.experts-exchange.com/questions/21923183/which-is-the-best-language-to-implement-a-parser-application.html | CC-MAIN-2016-50 | refinedweb | 179 | 50.73 |
I have always thought about the title of this blog entry, ever since I started learning Java. But, until recently, I had no idea about the same. I didn't knew that something magical existed that would perform this conversion. But now since I know a bit, I thought I better share it.
Java Decompiler
As you would have guessed by now, a Java Decompiler is a computer program capable of reversing the work done by a Compiler. In essence, it can convert back the Bytecode (the .class file) into the source code (the .java file).
There are many decompilers that exist today, but we will talk about the most widely used JD - Java Decompiler, which is available both as a stand-alone GUI program and as an Eclipse-plugin.
To install and use these tools are a breeze and would not take you more than a few minutes to get accustomed to it. Hence, I would not repeat the process that's already mentioned on their site.
One thing that we must note here is that the process of conversion might NOT result into 100% exact code, i.e. the generated Java file might not match the actual Java code character by character. However, most of the code would be replicated but things like variable & function names or some other minor details may differ.
Lets have a look at the JD-GUI, stand-alone tool, written in C++;making it pretty fast to execute(decompile) and display the result. Also, it is independent of the the Java Runtime Environment and thus no setup is required to install it.
Lets test the tool now.
Example Java Code:
public class test{
public static void main(String[] args){
System.out.println("Hello world");
}
}
Compile it: javac test.java
so that we have ByteCode (test.class file) with us now.
Decompile it using any one of the following ways:
- Execute the following on the command line: jdi-gui.exe test.class
- Select 'Open File' from the menu, browse to get to the test.class file and open it
- Drag and Drop the test.class file into the JD-GUI tool
All of the above situations result in generating the following Java Code, have a look:
import java.io.PrintStream;
public class test
{
public static void main(String[] paramArrayOfString)
{
System.out.println("Hello world");
}
}
Well just by seeing JD perform really well on this simple example, we cannot decide how efficient is this tool. But, still it is a tool that every Java Developer must be aware of. Because, in case you have accidently deleted your Java files and are left with only the .class files, this is the tool that can save your neck. :-)
4 comments:
Could not convert socket to TLS;
What's the cause of this error? I'm using NETBEANS 6.5, GLASSFISH V3 and JAVA MAIL API.
Any ideas?
Thanks for your help.
I have also found out some good java examples at like the one provided here. :)
It's very good. Till now i did't know about this.
It's very good. I didn't know about this till now. | http://techno-cratic.blogspot.in/2009/01/bytecode-back-to-java-source-code.html | CC-MAIN-2017-13 | refinedweb | 523 | 75.3 |
import "go.chromium.org/luci/common/runtime/tracer"
Package tracer implements code to generate Chrome-compatible traces.
Since there is no thread id concept in Go, pseudo process id and pseudo thread id are used. These are defined at application level relative to the application-specific context.
See for more information.
doc.go tracer.go tracer_posix.go
CounterAdd increments a value for a counter.
The values will be grouped inside the PID and each name displayed as a separate line.
CounterSet registers a new value for a counter.
The values will be grouped inside the PID and each name displayed as a separate line.
Discard forgets a context association created with NewPID.
If not called, contexts accumulates and form a memory leak.
Instant registers an intantaneous event that has no duration.
NewPID assigns a pseudo-process ID for this marker and TID 1.
Optionally assigns name to the 'process'. The main use is to create a logical group for events.
Span defines an event with a duration.
The caller MUST call the returned callback to 'close' the event. The callback doesn't need to be called from the same goroutine as the initial caller.
Start starts the trace.
There can be only one trace at a time. If a trace was already started, the current trace will not be affected and an error will be returned.
Initial context has pid 1 and tid 1. Stop() must be called on exit to generate a valid JSON trace file.
If stackDepth is non-zero, up to 'stackDepth' PC entries are kept for each log entry.
Tracing events before this call are ignored.
TODO(maruel): Implement stackDepth.
Stop stops the trace.
It is important to call it so the trace file is properly formatted. Tracing events after this call are ignored.
Args is user-defined arguments for an event. It can be anything as long as it is JSON serializable.
Scope is used with Instant event to determine the scope of the instantaneous event.
Possible scopes that can be passed to Instant.
Package tracer imports 8 packages (graph) and is imported by 8 packages. Updated 2020-01-18. Refresh now. Tools for package owners. | https://godoc.org/go.chromium.org/luci/common/runtime/tracer | CC-MAIN-2020-05 | refinedweb | 361 | 69.68 |
I've posted two proposals:
Advertising
Not much opinion. Though I fear that "(adapter)object" could lead to this syntax in Python itself, which would be horrid ;) I agree that adapter(object) is a bad direction. object*adapter looks fine to me, and it seems reasonable that only a specific set of adapters would be available in TAL expressions (i.e., adapters which provide ITALESAdapter).
Proposes a mechanism for easily using adapters in TALES expressions.
proposes a mechanism for qualifying names defined in TAL and used in TALES expressions.
I'm suspicious of namespaces, as they seem redundant. Namespaces in Python, after all, are expressed like any other attribute traversal (e.g., os.path.exists). The analog in TAL would be adapter/foo. This is how TAL works right now in practice, with a small number of namespaces like request, container, etc.
I see a few problems with the current situation:
1. There's no clear way to indicate that you want to use a name as a namespace, as opposed to a normal name. So there may be a conflict between the "adapter" you want to use as a namespace, and a template that someone writes that happens to use the variable adapter in an unrelated way. This is fine now, because there is a fairly fixed number of namespaces (six or seven, I think), and you just don't use those names -- as namespaces are added (especially on a per-metal-template basis) this conflict is more likely, and you may not know what names will cause conflicts in the future.
But I'm not sure how bad the name conflict really is. In my experience it's not too bad in Python code, and when it's a problem it's fairly easily resolved. Or maybe another one or two namespaces can be added which would sufficient, and we don't need to extend the number of namespaces indefinitely. Like an adapter namespace and a metal namespace (maybe you'd use things like metal/name_of_template.pt/variable_name). To some degree this could even be convention, instead of building it in explicitly.
2. Another issue might be the difficulty of creating a namespace for use with templates -- with the proposal all namespaces start out empty and ready to accept new values, but if you use normal variables they start out as undefined, and you'd have to assign them to {} or something.
(A little thought: if you had "def namespace(): return {}", then tal:define="adapter namespace" would work and reads fairly well)
3. Explicit namespaces support deeper, structured assignment (but only a *little* deeper). Does TAL currently allow tal:define="var/attr something"? I've never tried it. It should. Maybe the specific semantics of this assignment could be refined to resolve (2) -- e.g., if you get a LookupError during the second-to-last segment of the traversal, try to assign it to {}.
Anyway, whenever I look at a language with explicit namespaces (e.g., Ruby), it seems really pointless. I think they should be avoided, and that normal TAL path expressions can be refined instead.
It's also annoying that we'd have namespace['adapter'] in Python expressions. Namespaces might be a way to introduce a more accessible set of typical functions, like DTML's nl2br and other formatting functions -- these are currently too hard to get to. But these have to be used with Python syntax (at least currently), and doing namespace['formatters']['nl2br'](options['message']) is just bad. I don't much care for tal:define="nl2br formatters:nl2br" either, as it feels like boilerplate. I suppose "path('formatters:nl2br')(path('options/message')) is maybe a little better, but only a very little.
-- Ian Bicking / [EMAIL PROTECTED] /
_______________________________________________
Zope-Dev maillist - [EMAIL PROTECTED]
** No cross posts or HTML encoding! **
(Related lists - ) | https://www.mail-archive.com/zope-dev@zope.org/msg16495.html | CC-MAIN-2016-44 | refinedweb | 637 | 62.98 |
I learned that to add those codes in .emacs can make Emacs saves
automatically all situations before quitting and start it next time, Emacs
can show the last situation and go on editing it.
(load
"desktop") (desktop-load-default)(desktop-read)(add-hook
'kill-emacs-hook '(lambda()(desktop-save "~/")))
but this codes makes a pro
When writing a function, my implementation very frequently looks like
this:
A crucial part is the logging. Every function that
fails should add a short descr
please help me with asp.net MVC 2 application.
I have
class:
public class Account{
[Required(....)] [RegularExpression("....")] public
string AccountCode{ get; set; } public string BankName{ get;
set; } }
And another one:
public
class BankPageModel{ public bool Account
I am a beginner with Django. In what situations are Django formsets
used? (In real applications.)
Please give some examples.
As far as I can tell, the only use for out parameters is
that a caller can obtain multiple return values from a single method
invocation. But we can also obtain multiple result values using
ref parameters instead!
out
ref
So are there other
situations where out parameters could prove useful and where
we couldn't use ref parameters instead?
Let's say I've programmed an application which connects to a server
using the Socket Class (TCP). If I encounter a SocketException while
reading or writing, then obviously I have to do go ahead and run a
disconnection routine to change the application's state to
Disconnected.
But what if I've started to Disconnect, and while
I'm cleaning up, a SocketException occurs?
The | http://bighow.org/tags/situations/1 | CC-MAIN-2017-04 | refinedweb | 263 | 55.84 |
How Wikipedia Works/Chapter 1
Chapter 1. What's in Wikipedia?
What Is an Article?
- The actual millionth article, created on March 1, 2006, was w:Jordanhill (railway station). The two millionth article was created on September 9, 2007.
Types of Articles
Are you wondering how Wikipedia found enough topics to fill two million articles? Here are some (but by no means all) of the types of content that are included:
- Traditional encyclopedia topics
You can find all the types of content that you might expect from a general encyclopedia such as Encyclopaedia Britannica. Articles about science, historical events, geography, the arts, and literature are all included.
- People
No occupations or groups are restricted or emphasized, although in order to qualify for an article, the person must be notable, that is, well known within his or her major field of endeavor. Once this criterion is met, you may write an article about anyone: artists, musicians, scientists, historical figures, authors, athletes, politicians, monarchs, and on and on. (People are discouraged from writing about themselves, however.) The Wikipedia biography project (Wikipedia:WikiProject Biography) keeps track of biographical articles; by the end of 2007, there were nearly 400,000 articles listed as biographies, or nearly 20 percent of Wikipedia.
- Places
There are articles not just on countries, provinces, and major geographical features but also about cities and towns worldwide. For instance, there is an article about every city or hamlet in the United States (approximately 40,000 are recognized by the US Census Bureau).
Rambot: Most of the 40,000 articles about American towns were not created by hand; instead, they were created automatically with freely available census data. (The automated user account that created the pages is affectionately called Rambot.) For some time after Rambot made its initial efforts in 2002 and 2003, some community members complained that these census-based articles made up too much of the total article count. Now, however, it's not an issue because local residents and others have improved nearly all of the bot's articles, and the increase in other content means these articles now comprise only about 2 percent of the site.
There is still plenty to do in these conventional topic areas, but they don't crowd out other topics. Wikipedia includes many nontraditional subjects as well, including the following:
- Fictional characters
Want to read up on the personal history of Frodo or Darth Vader? While articles about real people are certainly included on Wikipedia, articles about well-known fictional characters are included as well.
- Media—movies, books, albums, songs, television shows (and their episodes), videogames, and more
Work in almost any medium can be considered for its own article.
- Companies and organizations
There are factual articles about most well-known corporations. The field of technology is covered particularly well. For example, the articles about Microsoft and Apple, Inc., are both comprehensive; these two articles reference roughly 100 outside sources apiece. Companies can be included in Wikipedia if there is enough reliable information and independent reporting available to support a useful article (simple existence of the company is not enough to qualify, and promotional material is not welcome). As with biographies, writing about your own organization or company is discouraged.
- Computer software and hardware
Considering the way Wikipedia is authored, you might expect a few articles about computers, and you'd be right—there are thousands of articles about programming languages, software, hardware, and computer science theory.
- Transport
Wikipedia has been a hit with transportation enthusiasts. There are thousands of articles about railway stations, canals, airports, and other minutiae of transport networks. For instance, the article I-35W Mississippi River bridge, about the interstate highway bridge in Minnesota that collapsed on August 1, 2007, was created well over a year before that event.
- Current events
Though the site does not support original reporting, Wikipedia is updated rapidly when major stories break. Current events coverage has had a major profile ever since the up-to-the-minute coverage of the 2004 Indian Ocean earthquake and related tsunami (this article alone had well over 1,000 edits in its first 48 hours). Finding out more about current events on the site is described in Chapter 3, Finding Wikipedia's Content.
Some pages are primarily navigational. These pages exist to point the way toward other Wikipedia pages. Three types of navigational pages are well worth noting:
- Lists
Linked lists are a defining feature of Wikipedia. Want to find a list of songs about Elvis Presley? No problem—it's at List of songs about or referencing Elvis Presley. Lists can be about nearly any topic, though like any content, they should ideally be referenced. In fact, List of female tennis players was one of the earliest pages created on Wikipedia. Lists are browsable; start from List of topics to find lists of … well, nearly anything. (See Chapter 3, Finding Wikipedia's Content for some of our favorites.)
- Disambiguation pages
These pages include a whole list of links to possible articles that have similar names. For example, the Wikipedia page Orange links to articles on the color orange, the fruit, the Orange Bowl, the Dutch royal house of Orange, and numerous other pages (see Figure 1.3). Because it is not possible to anticipate which meaning you may be searching for when different topics share a name, these disambiguation pages pull together all the possible options. These pages are especially useful for biographical names: If in the course of some research, you come across a surname only, try the Wikipedia page for that name. It may quickly offer you a range of individuals to choose from.
Figure 1.3. The disambiguation page Orange
- Redirects
These pages simply push you from one page title to another automatically. You won't actually see these pages directly, but they are used extensively for alternate spellings, variations on names, and any other situation where confusion might exist over the precise article title. Redirects are not included in the official article count, but lists and disambiguation pages certainly are.
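Under the hood, a redirect is an ordinary wiki page whose entire content is a single line of wiki markup. As a sketch (the page titles here are just illustrative), a page named "Jordanhill (railway station)" could send readers to the main article like this:

```wikitext
#REDIRECT [[Jordanhill railway station]]
```

When a reader visits the redirect page, MediaWiki displays the target article instead, with a small "Redirected from …" note at the top.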
Further Reading
- The auto-generated statistics page that gives the current article count
- A page with other statistics and interpretations
- A list of historical milestones for the projects
- An FAQ page that describes what an article is
- A list of some of Wikipedia's oldest articles
Article and Content Inclusion Policies
When people find out that anyone is allowed to add content to Wikipedia, they often assume that any type of content can be added and in any fashion. But in reality, editing and writing on Wikipedia is constrained by a kaleidoscopic array of rules, or policies (these are discussed fully in Chapter 13, Policy and Your Input).
Like a traditional encyclopedia, Wikipedia doesn't accept just anything, although its inclusion policies are clearly much broader than those for most encyclopedias. Articles are only kept on Wikipedia if they meet specific criteria.
Wikipedia has tried to filter out unencyclopedic material by codifying and abiding by general content policies, rather than by creating a list of approved topics ahead of time. What can be added to the encyclopedia is not laid down in advance, but is decided according to some basic principles worked out in the early days.
Policies determine both the kinds of topics that are acceptable and the way in which those topics are treated. If properly applied, the policies are designed to result in a fair treatment, no matter how contentious the topic. If policies cannot be conformed to—for example, if there are no reliable sources about a topic—then an attempt to create a good Wikipedia article for that particular topic may fail. Whether someone likes or dislikes the topic itself, however, should not have any bearing on whether an article is included. In other words, the only limit on what appears in Wikipedia is whether an article can be written that complies with all of the content policies.
No one in particular has the job of deciding whether an article is suitable for Wikipedia. Rather, contributors submit new pages to the site directly, and they go live immediately without intermediaries. Other contributors then review these articles. Large numbers of new articles are deleted every day, but new content that conforms to the content policies is kept. (See Chapter 6, Good Writing and Research for how to start a new article and Chapter 7, Cleanup, Projects, and Processes for how articles are deleted.) A new article may also be edited quite savagely to make it more suitable for keeping. An editor who inserts content that falls outside the policies, or removes content that is within them, is not furthering the aims of the project.
Although there is generally broad agreement on these policies, they rely (as with all things on Wikipedia) on editors actually applying them. If you find content that seems to violate these guidelines, it often means that no one has gotten around to fixing it yet.
Core Policies: V, NOR, and NPOV
Three policies are so central to Wikipedia's workings that the encyclopedia would be unrecognizable (or nonexistent) without them. These core policies are Verifiability (V), No Original Research (NOR), and Neutral Point of View (NPOV). In broad strokes, they form the framework in which content is created and edited on a daily basis with no top-down editorial control.
From the outset, Wikipedia was committed to a Neutral Point of View (NPOV). This policy is similar to what journalists mean by objectivity in reporting.
As time went by, contributors became more determined to keep out guesswork and rumors, so Wikipedia needed a policy that promoted fact-checking. This principle is now formulated as verifiability from reliable sources.
With Wikipedia's growing popularity, there was also a basic need to prevent Wikipedia from being used as a soapbox to spread new ideas that someone had just thought up (euphemistically referred to as original research). The No Original Research (NOR) policy says that ideas and facts must be previously published elsewhere by a third party before they are documented in Wikipedia.
Policies Are Important
Most of Wikipedia's policies began as temporary solutions to disputes or other problems. Because they worked well and proved robust in so many contentious areas, they became universal across the encyclopedia. The practical application of these policies is open to some interpretation, but if a Wikipedia contributor has major disagreements with these policies even in theory, that contributor will probably not be happy on Wikipedia.
Policies vs. Guidelines
There is a distinction between a policy, which is mandatory, and a guideline, which is advisory. Guidelines are more complex rules that help to keep Wikipedia's quality high. The three core content policies are supported by a host of associated guidelines, which will be discussed as we go along. These guidelines include the concept of notability and various principles defining the boundaries of Wikipedia's coverage.
In outline, each of the major policies is apparently simple enough. The unpacking of their implications is another matter. Imagine, if you can, an article about a rock band that is neutral about drug abuse and explicit lyrics, that only reports published documentation on trashed hotel rooms and the influence of The Smashing Pumpkins, and that cites its references in footnotes as assiduously as any doctoral dissertation. You are coming close to the distinctive Wikipedia voice.
Understanding the Policies
Verifiability (Wikipedia:Verifiability, shortcut WP:V) means that you should always be able to verify that the content of a Wikipedia article is factual, using reliable outside sources that are cited within the article. The Verifiability policy exists to make Wikipedia more accurate. Misremembered facts, casual writing, and gossip should not be included in articles.
In a perfect article, any major statement of fact is attributable to a source outside of Wikipedia, no matter which editor (anonymous or not, expert in the field or not) added the information. References in Wikipedia are explicitly cited, which is different from many traditional encyclopedias. Those works are written by small groups of experts, but because Wikipedia is open to everyone who wants to contribute, even anonymously, it is correspondingly important to be sure that an article's statements can be confirmed by reliable outside sources.
If a topic has never been discussed by any reliable, third-party sources, the Verifiability policy dictates that Wikipedia should not have an article about that topic. Writing the article should be put off until better sources have been published outside Wikipedia. (A lack of published sources might also indicate that the topic is only of interest to a few people; see "Other Guidelines" later in this chapter.)
In practice, being able to verify information from other sources is very useful, even on apparently minor points. And when an article provides a list of sources, it becomes a convenient jumping-off point for further research.
Aside from benefiting readers, the Verifiability policy also simplifies things for Wikipedia editors by giving them a clear question to ask when evaluating an article's quality: Is this statement reflected in outside sources?
Though Verifiability is a core policy, it has yet to be fully implemented, and thousands of articles are tagged as being unreferenced (see Figure 1.4). Verifiability is applied as a general principle. In practice, the ability of editors to verify a statement may depend on, for example, having access to a good library (a major concern in many developing countries). A fact should only be included if checking its accuracy is at least possible in theory; for important true statements, sources can almost always be found with time.
Figure 1.4. This is the template message for articles that don't cite any sources, which is a key part of complying with the Verifiability policy. These messages are meant to warn readers and alert editors that the article is unfinished.
You will certainly see unreferenced content on Wikipedia. Some of this content remains unsourced simply because sourcing is hard work, and Wikipedia is a work in progress. But some content clearly violates the idea of verifiability (for example, anything that is contentious and badly referenced or that really couldn't be referenced, such as things said in a private conversation). This material may be challenged and ultimately removed. (For more discussion on referencing style and sourcing, see Chapter 6, Good Writing and Research.)
No Original Research (Wikipedia:No original research, shortcut WP:NOR) means that all concepts and theories in Wikipedia articles should be based on previously published accounts and ideas. Wikipedia articles shouldn't contain original ideas, conclusions, descriptions, or interpretations of facts. Nor should they contain editors' personal views, political opinions, or any unpublished analysis of published material.
If you have something innovative to say, Wikipedia is not the right place to present it to the public. In other words, if you have performed an experiment, thought of a philosophical argument, or developed a mathematical proof—good for you! But this content doesn't belong in the encyclopedia unless your work has already been published somewhere else (ideally in a peer-reviewed and scholarly source).
Reliable Sources
Inevitably, there is much debate within the project about what exactly a reliable source is; this debate has gradually produced a guideline called Reliable Sources (which clarifies the Verifiability policy). It lists a wide variety of possible types of sources and naturally includes traditional scholarly books and articles. Certain websites do qualify, but self-published sources such as blogs usually do not. While source criticism (the picking of holes in the reputation of sources) should mostly be left to experts in a particular area, the meaning of the guideline is evident enough: Wikipedia aims to produce accurate, serious reference material, and the sources upon which it bases its facts must, therefore, be as reputable as possible. See Wikipedia:Reliable sources (shortcut WP:RS).
The initial motivation for the No Original Research policy was to prevent people with unconventional personal theories from using Wikipedia to draw attention to their ideas. These days, No Original Research is consistently used against the inclusion of material that is in no sense crackpot but is simply too novel for Wikipedia. Articles may also be tagged as possibly containing original research if it is suspected that material in them comes from an editor's personal experience, rather than verifiable sources (see Figure 1.5).
Figure 1.5. Article template message indicating concerns over violations of the No Original Research policy
NOR also means that editors should not be tempted to provide historical interpretations or draw conclusions, even if they seem self-evident, without citing supporting outside sources giving the same interpretations. One consequence is that historical articles tend not to end with overall summary assessments of people or events. Conclusions from historians can be cited, but if two historians disagree, there should be no authorial attempt to reconcile the views; both sides should be given and the readers left to draw their own conclusions. Some pattern may exist in the facts, but it is not for Wikipedia to break this to the world. If someone else points it out, it can be mentioned and attributed.
Verifiability, Reliable Sources, and No Original Research clearly have something in common. In Wikipedia, both facts and opinions must be based on and referenced to outside information and ideas that have already been published. There is ongoing discussion on whether these principles can be summarized together under the idea of attribution.
Neutral Point of View (Wikipedia:Neutral point of view, shortcut WP:NPOV) means that all points of view about a particular topic should be fairly represented. NPOV is one of the oldest, most respected, and most central policies on Wikipedia. A neutral article makes no case and concentrates on informing the reader by providing a good survey of its topic. It is fair-minded and accurate and deals with controversial matters by reporting the main points where there is disagreement.
From the reader's perspective, the effect of neutrality should be this: An article on a contentious topic, such as a historical event that is seen differently by various groups, should not reveal where the article author stands on the matter. In almost all cases, such an article will have been worked over by a group of editors, and their opinions should not come through. Although the example of a rock band was given previously, there are more serious topics where maintaining a neutral point of view is not easy to apply. Consider a neutral treatment of slavery, communism, the history of Ireland, or abortion. Each of these has to be treated on a scrupulous basis, with proper weight given to all sides of the story. The discussion of rival opinions should be in a tone containing no sympathy or bias, regardless of the topic.
Neutral articles should also be comprehensive, though they don't have to be all-inclusive. All significant views should be provided or outlined, however. The reasons why a particular view is popular should be given in fair summary, but the overall expression in an article should not be slanted. NPOV doesn't mean that minority views must be written about with equal coverage to majority views, particularly when there is a wide disparity in their acceptance; points of view should be written up proportionately. Small minority views, such as "the Earth is flat," can be treated briefly, or in some cases omitted as being below Wikipedia's natural threshold of attention. There is no doctrine of equal time. In fact, to give all views equal coverage regardless of their outside acceptance is in itself an act of editorializing. The same goes for what facts or incidents are emphasized in an article; a scandal, rumor, or conspiracy theory may be included (if properly sourced), but shouldn't be given unwarranted headline status. Wikipedia is not tabloid journalism.
Using a neutral point of view, all sorts of controversies can be handled. An article should never directly include opinion within the text: "Coke is much better than Pepsi" is the wrong approach. Rather, the statement should be neutral, indirect, accurate, and specific. For example, it is acceptable to write "according to a 2006 Taste Tester's poll published in Taste Testers Monthly, 52 percent of taste testers found Coke to be better than Pepsi," with a full citation to the article being referred to. (This is a fabricated quote, by the way. See New Coke for some real quotes.) Of course, neutrality also rules out all sorts of propaganda tricks based on selective quotation.
NPOV also comes to the rescue where sources differ on the facts. Editors are often faced with contradictions in the historical record or factual matters; for example, whether person X was a nephew or a son of person Y. Both claims can be included. According to Verifiability and Neutral Point of View, this disputed factual point should appear as "Source A says X was the nephew of Y, whereas B says X was the son of Y," with references. According to the No Original Research policy, the matter should be left there, and if source C publishes some new evidence, this should then be added. Wikipedia is not a court in which verdicts are reached, and editors should not attempt to figure out the "right" answer themselves; an article may simply present the evidence, fairly and at adequate length, for the reader to consider.
Following NPOV means that advertisements, press releases, and other promotional materials aren't welcome on Wikipedia because these are inherently non-neutral. This may sound fairly obvious, but it affects the community's acceptance of other sources as well. For example, text from promotional websites for companies or schools, which are often used for sources, is often non-neutral and should be considered carefully before being cited.
In addition to making advertising unacceptable, NPOV is also a prime reason why editors are strongly discouraged from working on articles about themselves or their organizations. Except for basic factual corrections, it really is difficult to be neutral about yourself. (Also remember that any statement in an article, even if it's about a subject you know as intimately as your own life, needs to be backed up with a citation to an outside source because of Verifiability and No Original Research. Wikipedia should never be used for promotion.)
Editing Scandals
Some violations of the NPOV policy have been high profile; for instance, it was discovered that staffers for a politician were editing that politician's biography to be more favorable and removing uncomfortable facts. Naturally, this violated the Neutral Point of View policy. On January 27, 2006, the Lowell Sun reported on the Wikipedia article about an American politician, Representative Marty Meehan. It claimed that an anonymous editor, with an IP address traced to the House of Representatives offices, had been at work erasing mention of the congressman's broken term-limit promise. This then became a national news story.
All of the content policies, but particularly NPOV, affect Wikipedia's style and the way its text is worded. Disputes about NPOV often end up on the Talk Page of the article (discussed in Chapter 4, Understanding and Evaluating an Article); if there is heavy debate about a topic in evidence, an editor may flag the article as being involved in an NPOV dispute (see Figure 1.6).
Figure 1.6. Article template message indicating concern that the tagged article does not have a neutral point of view
Other Guidelines
Along with the three core policies discussed in the previous section, a handful of other guidelines help determine what content is included in Wikipedia.
Notability
Wikipedia should only cover topics considered noteworthy in the outside world, as determined by reliable, independent secondary sources. Notability helps set a baseline level for inclusion to prevent Wikipedia from becoming something other than an encyclopedia. In practice, the lack of notability is the most common reason why a topic is deemed unsuitable for a Wikipedia article.
This concept is distinct from "fame," "importance," or "popularity," but it does mean there shouldn't be articles about topics that are of interest only to a very few people or of such local interest that there are no publications about them. In other words, an article should not be about your pet or your house (unless either of these is particularly well known and has been written about previously).
Notability is easy to think about superficially but difficult to apply or cleanly define in the abstract. A feeling for notability requires a practical sense of the relative significance of topics in a field, and it also requires a scholarly sense of which types of sources determine notability. An encyclopedist has to wrestle with weighing the extent and quality of information available on a topic. To take one example, King Edward V of England, one of the princes in the Tower whose reign was cut short when his uncle, Richard III, took the throne, is clearly notable, even though much that has been written about him and his fate is speculative.
In part because of this ambiguity, Notability is much more controversial and open to debate than Verifiability, No Original Research, and Neutral Point of View, but it is also closely related to these policies. Arguments about it may be tortuous in the abstract, but in practical terms, non-notable articles are deleted from Wikipedia over time.
There are separate notability guidelines that have been set up for various controversial areas, such as actors and actresses, websites, companies, musical groups, videogames, and so on; these guidelines may be found through links on the main notability page. Many of these guidelines are in place to help reinforce the idea that Wikipedia is not a promotional service, and most of them fall back on whether there are any reliable secondary sources to be had and the amount of documentation available on a topic. For example, if Alice has a website that gets thousands of hits a day, but no one has written about it in any sort of publication, Bob will likely not be able to write a successful Wikipedia article about Alice's site that doesn't get deleted by other editors as being non-notable, or with the short dismissive comment nn.
Similarly, suppose Carla hopes to write about her favorite band, which is much beloved locally but has no major music press. Not only would writing a neutral article be difficult, but also there are no reliable published sources that Carla can use (even if she knows the band's history first-hand).
As in the previous example, notability is something that should be considered in relation to each individual article, rather than whole classes of topics. Some musical groups are certainly notable, as are some companies and some videogames; others are not. The notability guidelines help sort this out.
On the other hand, there are inherent problems with the idea of notability which have led to many ongoing debates over the years on how to phrase and apply the guidelines. Here are some caveats to keep in mind regarding notability:
- Notability may be perishable. Some topics are ephemeral in their interest, such as Internet memes and celebrities in the "famous for being famous" category.
- On Notability: Notability is something that is judged by the world at large, not by Wikipedia editors making personal judgments. If multiple people in the world at large who are independent of the subject have gone to the effort of creating and publishing nontrivial works of their own about the subject, then they clearly consider it to be notable. Wikipedia simply reflects this judgment. (Adapted from User:Uncle G/On notability)
- Notability is not the same as having a fan or someone taking time to research a topic in depth; there must be multiple independent sources.
- The availability of accessible literature in English on any given subject can distort perceptions of notability; biographical facts, in particular, are unevenly accessible, leading to systemic bias, which will be discussed in Chapter 12, Community and Communication.
- Notability is not distinction. It might arise from scandals or participation in controversies, as well as from recognized work such as writing a book.
- Notability in a field is not the same as reputation. Wikipedia will, for example, include cranks who are now discredited but became famous for some reason, but omit solid scientists who are simply not well known.
On that last point, it is obviously flawed to assume that if there's no Wikipedia article, the subject is not notable. Wikipedia is a work in progress, and many worthwhile potential articles have not yet been written.
To sum up, writing a verifiable article without good sources is a bricks-without-straw exercise, and the presence or absence of sources helps determine notability. Thinking about notability helps to keep the project encyclopedic. The notability guideline as applied probably still errs in the direction of inclusion, with a bias toward lesser topics that are well documented elsewhere. This is a natural consequence of a policy evolution that has made reliable sources ever more central.
For a humorous take on unsuitable topics, see Wikipedia:List of really, really, really stupid article ideas that you really, really, really should not create (shortcut WP:DUMB). For a more serious version, see Wikipedia:List of bad article ideas (shortcut WP:BAI).
Copyright and Plagiarism
As with other publications and organizations where writing is submitted, plagiarism is not allowed. In addition, any materials submitted to Wikipedia must be specifically licensed under the GNU Free Documentation License (GFDL), which is a "free license" (see Chapter 2, The World Gets a Free Encyclopedia) distinct from traditional copyright. This license means that anyone can reuse and redistribute Wikipedia's content for any purpose without asking permission, as long as they meet certain conditions; Wikipedia content can be used on other sites or even republished in print.
For these reasons, materials taken from other places generally shouldn't appear on Wikipedia. You shouldn't take text or photos from the Internet or elsewhere and reproduce them on Wikipedia without explicit permission; copying any work that is not in the public domain or explicitly licensed as being freely available is a copyright violation.
Additionally, material that was not originally written for Wikipedia (such as a term paper) typically doesn't meet the other content guidelines. It is best, in almost all cases, to simply write the article afresh.
Non-encyclopedic Content
Some non-encyclopedic content is inappropriate for Wikipedia but may be welcome on other sister Wikimedia projects. For instance, definitions of words (without supporting encyclopedic information) are outside of Wikipedia's scope. The jargon used to describe such articles is dicdef, short for dictionary definition. A dictionary definition alone isn't sufficient for a Wikipedia article. However, dictionary definitions are very welcome at Wiktionary, Wikimedia's free dictionary project.
Original reporting of events is also not a part of Wikipedia. You may have been an eyewitness to an event, but writing what you know you saw straight into the encyclopedia probably violates the No Original Research or Verifiability policy. Wikipedia must wait for the mainstream media to report the facts, which it can then collate. On the other hand, original reporting is part of the mission of Wikinews, which is a citizen journalism project.
Similarly, a "how-to" article may not be encyclopedic, but would be just fine over at Wikibooks, Wikimedia's project to write free textbooks.
Original source documents (for example, the text of Coleridge's "Rime of the Ancient Mariner") are not welcome on Wikipedia, but that is because primary sources belong on Wikisource.
These sister projects are fully described in Chapter 16, Wikimedia Commons and Other Sister Projects.
What Wikipedia Is Not
It's sometimes helpful to think about content inclusion guidelines in negative terms. Here is the basic consensus about what Wikipedia is not (adapted from Wikipedia:What Wikipedia is not, shortcut WP:NOT). Taken together, these statements usefully define boundaries applied to Wikipedia's content. They also exist as longer formulations spelled out in policies and guidelines.
- Wikipedia is not an indiscriminate collection of information, a directory, or a dictionary.
It's an encyclopedia (and preferably a well-rounded one) in which criteria such as notability are used to weed out entries. For example, an article titled List of bands beginning with the word "Lemon" was exactly what its title implied: a simple list, without analysis or context, that named the Lemonheads, Lemon Jelly, and a few other bands. It was quickly deleted. Articles on Wikipedia ought to serve some purpose. They should provide something recognizable as "information," concerning something recognizable as a "subject."
On a similar note, Wikipedia doesn't strive to be a Who's Who or a catalog of published works. Family trees and other family histories are not stored on Wikipedia, as much family history is considered "indiscriminate": Being related to someone notable doesn't make a person notable (with the exception of royal families and others where the hereditary principle matters).
- Wikipedia is not a paper encyclopedia.
In particular, Wikipedia does not need to worry about printing costs or physical unwieldiness. It doesn't need to shorten or triage articles to conserve space. As long as there is money to buy servers and bandwidth, there are no physical restrictions on growth.
The implications for coverage are major: "Not worth including" is a decision that need not be made quite as often. This is another reason Wikipedia's model is a dramatic change from earlier encyclopedias. As long as articles conform to the site's other guidelines, specialized or minor articles can be included. Wikipedia has no set restrictions on what branches of human knowledge should be included.
- Wikipedia is not a publisher of original thought, nor a soapbox.
This reiterates the policy of No Original Research: Wikipedia is not interested in personal essays. Indeed, it's a bad platform on which to air personal or political views. If you're looking for a way to get your name and opinions online, many free website and blog providers exist. Reviews of products, companies, and other personal opinions—whether positive or negative—are likewise unwelcome in Wikipedia articles. These are better placed on a website dedicated to reviews.
- Wikipedia is not a mirror, repository of files, a blog, webspace provider, or social networking site.
This might seem like a strange point to make as it is directed not at Wikipedia's articles but at its user pages, the pages editors create for their own working space. (We will cover user pages in Chapter 11, Becoming a Wikipedian.) Anyone can come along and create a user page, but Wikipedia only supplies this working space to allow editors to identify themselves and collaborate more effectively—not to back up unrelated files, publish a blog, or find a potential mate. Wikipedia is a project with a very specific purpose—to create and distribute an encyclopedia. It is not a helpful web application for storing other unrelated information.
- Wikipedia is not a crystal ball.
This is a warning about posting rumor and speculation about future events, such as gossip about films that are currently in production. If it hasn't happened yet, it isn't Wikipedia material (though as with all guidelines, this should be interpreted using common sense: It doesn't mean that the article on the 2012 Summer Olympics should be started only when the opening ceremony gets under way).
- Wikipedia is not censored.
Articles aim at a general and educated adult audience, and Wikipedia is neither simplified, nor is it compiled with regard to the needs or protection of children. While content is intended to be factual, it is also frank, and human sexuality is extensively covered. Religion is treated along the same lines as all other content. Some images in the encyclopedia may be disturbing or shocking.
Thus, some content may be considered offensive or inappropriate for young children. Understandably, this lack of censorship can cause distress—there are many hundreds of articles about topics that many people would prefer not to think about. Considering that the aim is to be a repository of all human information, written by a truly diverse group of people from all over the world, this is unavoidable. And given the policies of Neutral Point of View and Verifiability, Wikipedia is often an excellent source for information on controversial or potentially offensive topics.
Note: Wikipedia, however, should certainly not contain anything defamatory toward individuals. w:Wikipedia:Biographies of living persons (shortcut WP:BLP) sets down strict conditions of inclusion for articles about people. Verifiability and NPOV apply to all topics and are firmly enforced in cases where real lives may be affected. If, by misfortune, you do feel defamed, turn to Section 2.4.1, "Help, an Article About Me Is Incorrect!", for specific complaint advice.
- No Blue Pencil, No Free Speech
"No censorship of topics" does not mean that other inclusion policies and behavioral guidelines for onsite interactions can be ignored. Though broadmindedness is highly valued on Wikipedia, nowhere in the policies is there anything about free speech. The site is designed as an encyclopedia project, not as a general forum.
- Wikipedia is not static.
Articles are never set in stone. The encyclopedia is an open-ended work in progress, and Wikipedia articles are, by definition, always provisional. Even the best articles aren't considered off limits for further improvement. This attitude reflects a shared view of knowledge as something that by its nature is dynamic and expanding, rather than settled.
This final point is often left unspoken, but it is key. Changes can always be made, articles can always be improved, and there is always something else to do.
- Further Reading
- The NPOV policy
- The NOR policy
- The Verifiability policy
- The Notability guideline
- Various notability guidelines for specific subjects
- Guideline for judging reliable sources
- The policy on what Wikipedia is not
Non-article Content
All pages on Wikipedia are of two types: About two million articles constitute the encyclopedic content, but ten million project-related pages also exist. What are these pages? Will you see them if you just look something up? If you find them when using a search engine, should you ignore those hits?
Wikipedia's readers should recognize that some Wikipedia pages are not articles, but they do not need to have any particular understanding of the non-article pages and can ignore them freely. On the other hand, involved editors should understand the different types of pages—their purpose and the way they help grease Wikipedia's wheels. The project-related and administrative pages are not as glamorous as articles, but they're of no less importance when it comes to understanding what happens in practice on the site.
3.1. Types of Non-article Pages
These extra pages come in several varieties. Non-article pages are devoted to the administration of Wikipedia, discussion of article content, technical infrastructure, descriptions of images, and the Wikipedia community.
Although they are not as widely known as articles, two of these page types—discussion pages and user pages—are actually the easiest places to start participating on Wikipedia.
- Talk pages
Every article is coupled with a talk page (also called a discussion page), which is accessed by clicking the Discussion tab at the top of the screen. Here editors ask questions about the article's content, propose changes, display notices for other editors, and discuss technical matters (like the title of an article and whether an article should be split into pieces or combined with another).
Each discussion page is meant only for discussing the article it is linked to. Despite the name, discussion pages are not forums for general discussion of the article's subject.
A discussion page is attached to almost every non-article page as well. (Discussions about Wikipedia policy tend to range more widely than discussions about individual articles, but still remain somewhat tied to the topic of the attached page.)
For more on talk pages, see Chapter 4, Understanding and Evaluating an Article; Chapter 11, Becoming a Wikipedian; and Chapter 12, Community and Communication.
- User pages and user talk pages
User pages are for individual editors (users) to describe themselves in whatever detail they see fit. By custom, they are set aside as a private space where editors can work. Often, editors will list projects they're a part of and articles they've worked on.
User talk pages, like article discussion pages, can be reached by clicking a tab at the top of the screen. To communicate with each other, editors leave notes on user talk pages. Whenever someone leaves a note on your user talk page, Wikipedia's software notifies you. (You'll find more on setting up user pages and leaving messages in Chapter 11, Becoming a Wikipedian.)
The other kinds of pages are typically used as references and project coordination pages.
- Policy pages and guidelines
These pages provide guidance about editing content and interacting with other volunteers. Policies and guidelines lay out stylistic guidelines for editing, content inclusion policies, procedures to resolve disputes, and much more. Policies will be described further in Chapter 13, Policy and Your Input.
- Community discussion, procedural, and project pages
These pages are where the community discusses proposals and coordinates editing projects. Routine procedures, such as deletion discussions, are usually based on policies and are carried out on special procedure pages. These processes will be described more in Chapter 7, Cleanup, Projects, and Processes. On Wikipedia what the community means tends to vary according to context—after all, the site is open to all comers—but often enough, it implies those who take part in these open-forum discussions.
- Help pages
These pages include documentation of editing syntax, technical procedures, and best practices, and are referenced throughout this book.
- Image description pages
Each image is coupled with an image description page. These pages exist to provide the image with a textual description (metadata).
- MediaWiki-generated special pages and administrative pages
These are pages generated on the fly by the MediaWiki software and serve as utilities rather than editable pages. They are used for special lists and essential pages, such as the account creation pages.
Namespaces
Each type of page is distinguished from every other type (including from articles) by a prefix; for example, discussion pages are prefixed with Talk:. This prevents "collisions" between similarly named pages, for example, Sorting, which is an encyclopedia article about the process of arranging items, and Help:Sorting, which is not an encyclopedia article but instead offers technical assistance about the sortable tables found on some Wikipedia pages.
Each prefix is actually an indicator that the page is inside a particular namespace. (A namespace is a kind of container for different types of content.) For example, in this full Wikipedia URL
http://en.wikipedia.org/wiki/Talk:Benjamin_Franklin
Talk indicates the namespace where the page exists, whereas Benjamin Franklin, separated from the namespace with a colon (:), is the page's name. If you were internally linking to this URL, you'd use the combination of the namespace and page name to properly indicate what page you meant: Talk:Benjamin Franklin.
Articles, which exist in the so-called main or article space namespace, do not have prefixes:
http://en.wikipedia.org/wiki/Benjamin_Franklin
Benjamin Franklin is the full page name; the absence of a prefix tells you the page is an encyclopedia article.
All other types of content in Wikipedia exist in one of the other namespaces, which are indicated with one of 19 possible prefixes. Seeing a prefix before a title tells you that the page is likely part of the community or administration of the site (and therefore is not subject to the same content guidelines as articles).
The namespace also provides context and indicates the type of content that a page contains. For example, help pages contain technical documentation, rather than (say) encyclopedia articles or policies.
Although two pages in the same namespace cannot share a title, pages can exist under the same "name" in different namespaces. For example, the article Phoebe is about a personal name and is part of the encyclopedic content of the site. It is not the same thing at all as the page User:Phoebe, which exists in the User namespace and describes an editor who uses this name as a pseudonym.
The lines between encyclopedia content, on the one hand, and the Wikipedia community pages, on the other, are extremely clear and are delineated with the use of namespaces. As implemented on Wikipedia, community namespaces do not always exactly correlate with a single specific type of content. For instance, whereas only user pages are in the User namespace, you may find various pages such as technical documentation, community projects, and policies in the Wikipedia namespace. All of these pages, however, will have something to do with the running of Wikipedia.
All Pages in a Namespace
To scan a list of all of the pages in a namespace, click Special Pages in the Toolbox menu on the left-hand sidebar. At the top of the list that appears is the entry All pages. Click that, and a pull-down menu (to select a namespace) and a search box appears. The namespace listing will start at whatever spelling you place in the search box, something very necessary because several namespaces contain millions of pages. (Adapted from Wikipedia:Tip of the day/October 25, 2006)
List of Namespaces
Wikipedia has 20 built-in namespaces. These occur in pairs (for example, User and User_talk); there are nine such pairs, including the main namespace, where page names have no prefix, and two special namespaces, Special and Media. A namespace prefix must be kept when linking to a page. The prefix always comes before the page name and is separated from it with a colon.
- MediaWiki
Wikipedia runs using MediaWiki software, so all other wikis running on MediaWiki have these namespaces as well. Wikipedia adds two custom namespaces that do not exist on other wikis (Portal and Portal_talk) and has the Wikipedia and Wikipedia_talk namespaces, which may be appropriately renamed on other wikis.
For reference, the following namespaces exist:
- The main or article namespace has no special prefix. This namespace is where all regular articles (all the "encyclopedic" pieces of the encyclopedia) exist. Pages in this namespace can be linked to internally with simply their name: [[pagename]].
- The Wikipedia namespace is what could be called the project page namespace. It is for pages that are specifically about running Wikipedia and meta-level subjects related to the project. For example, the Community Portal can be found at Wikipedia:Community_portal and is meant as a place for the Wikipedia community to gather; Wikipedia:Statistics and its talk page, Wikipedia_talk:Statistics, are meant for describing and discussing the project's statistics. Policies, procedures, guidelines, community projects, and many help pages all exist within the Wikipedia namespace. The Wikipedia namespace may sometimes be abbreviated to WP, enabling shortcuts to be set up. For instance, WP:ARB redirects to Wikipedia:Arbitration_Committee.
- The User namespace refers to user pages or pages that have been set up by individual editors to describe themselves, for example, User:Jimbo Wales. By custom, your user page is available when you register a username.
- The Help namespace refers to basic documentation and help pages for using and editing Wikipedia. The prefix for these is simply Help:. Most of the project documentation pages are here or in the Wikipedia namespace.
- The Category namespace is a major part of expertly using Wikipedia; we discuss categories at length in Chapter 3, Finding Wikipedia's Content and Chapter 8, Make and Mend Wikipedia's Web.
- The Image namespace is prefaced by Image: and is used for describing and attributing images (for example, Image:White shark.jpg). If you upload any image or other media file to Wikipedia, one of these pages will be created. The Media namespace is prefaced by Media: and is used for a link directly to a media file, rather than its description page. Details are in Chapter 9, Images, Templates, and Special Characters.
- The Template namespace is prefaced by Template: and is used exclusively for templates that are transcluded or substituted into an article. You'll find more on templates in Chapter 9, Images, Templates, and Special Characters.
- The Portal namespace is for portal pages that collect articles on a particular topic; this is special to Wikipedia and not generally for MediaWiki. For more on portals, see Chapter 3, Finding Wikipedia's Content and Chapter 7, Cleanup, Projects, and Processes.
- The Talk namespaces contain all the discussion pages. Except for special pages, every namespace has an associated Talk namespace, designated by adding talk: after the normal namespace prefix. In this book, we write these compound names with an underscore to be clear, but you can always use a space. The Talk namespace associated with the main article namespace simply uses the prefix Talk:, for example, Talk:Mathematics. The Talk namespace associated with the User namespace, however, has the prefix User_talk:. Similarly, Wikipedia namespace discussion pages are in the Wikipedia_talk namespace, so the discussion page for Wikipedia:No original research is at Wikipedia_talk:No original research. Generally, pages in the Talk namespaces are used to discuss changes to their corresponding page; however, pages in the User_talk namespace are used to leave messages for a particular user. The User_talk namespace is special in that, whenever a user's talk page is edited, that user (if logged in) will immediately see a message informing them that they have new messages.
- The Special namespace refers to pages that are autocreated by the site's software on demand. These pages are not editable in the usual way and are generally either tools or automatically generated variable lists, such as a list of all pages on the site. See Help:Special page for a list.
- The MediaWiki namespace is used for certain site messages along with a few other areas to define shortcuts and other text strings used around Wikipedia (for example, MediaWiki:Disclaimers). These pages are not usually editable by users.
- Further Reading
- An article about MediaWiki with a good explanation of namespaces
- The help page on namespaces
- A description of each Special namespace page
Summary and What to Read Next
Wikipedia contains a staggering volume and remarkable variety of content, ranging from traditional encyclopedic subjects to articles about popular culture and technical topics.
Even so, every Wikipedia article must meet several criteria related to the site's mission. The most important criteria are the three core policies: Verifiability (V), No Original Research (NOR), and Neutral Point of View (NPOV). A number of further guidelines and corollaries to the major policies, particularly the notability guideline, help define what you should find in Wikipedia and what types of articles are acceptable.
Although there are now over two million articles in the English-language Wikipedia, there are even more pages devoted to the administration and community of the site. These pages, none of which are part of the Wikipedia encyclopedia, include discussion (or talk) pages; user and user talk pages; policy, procedure, and help pages; project administration and community discussion pages; image description pages; and MediaWiki-generated special site-related pages. All of these different kinds of pages are differentiated from each other by namespaces, which are indicated with prefixes that are separated from the page's name with a colon. Articles reside in the main or article namespace and have no special prefix.
In the next chapter, we'll discuss the origins of Wikipedia and how three disparate historical strands—wikis, encyclopedias, and free software—came together to influence the site's development. Skip to Chapter 3, Finding Wikipedia's Content to explore the structure of Wikipedia and learn better ways to search and browse the site or to Chapter 4, Understanding and Evaluating an Article to learn how to evaluate an individual article. | https://en.wikibooks.org/wiki/How_Wikipedia_Works/Chapter_1 | CC-MAIN-2016-40 | refinedweb | 8,775 | 51.18 |
Introduction to Method Overloading in Python
Method overloading is a unique methodology offered by Python. Using this feature, a method can be defined in such a manner that it can be called in multiple ways. Every time a method is called, it is up to the user how to call that method, i.e., how many and which parameters to pass to the method. So, the method can be called with no parameters, a single parameter, or multiple parameters. Such functionality offered by Python is termed method overloading in Python. The feature allows the use of methods to be regulated in a user-defined manner.
Syntax:
class class_name:
    def method_name(self, param1=None, param2=None):
        # method code

# Create object instance
object_variable = class_name()

# Call the method with no parameters
object_variable.method_name()

# Call the method with a single parameter
object_variable.method_name(parameter)

# Call the method with multiple parameters
object_variable.method_name(parameter1, parameter2)
How Method Overloading works in Python?
As described earlier, method overloading refers to the fact that a single method can be used in multiple ways, by passing a variable number of parameters. For example, we may want to return the product of the specified parameters. The number of parameters can be two, three, or just one.
Examples of Method Overloading in Python
Given below are the examples of Method Overloading in Python:
Example #1
object_multiply.mult_num(20)
# Two variable method execution
object_multiply.mult_num(20, 30)
# Three variable method execution
object_multiply.mult_num(20, 30, 40)
Output:
In the above program, we have the class multiply. The class has a method called mult_num; this method takes three numbers as parameters. Using an if-elif-else statement, the method checks whether any of the numbers equals None. When only a single number is passed as a parameter, the program simply prints that number. When two numbers are passed as parameters, the program returns the product of those two numbers, and when three numbers are passed as parameters, the program returns the product of the three numbers. So, we find a single method being used in whatever manner the user requires. In every situation the method does its job correctly.
In the above program code, the parameters were passed as hard-coded values, and so, we got the output as shown below.
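The class definition itself does not appear in the listing above, but the description makes its shape clear. A minimal reconstruction might look like the following (the class and method names match the calls shown; the exact body of the method is an assumption based on the text):

```python
class multiply:
    def mult_num(self, a=None, b=None, c=None):
        # Three numbers passed: the product of all three
        if a is not None and b is not None and c is not None:
            result = a * b * c
        # Two numbers passed: the product of the two
        elif a is not None and b is not None:
            result = a * b
        # A single number passed: simply that number
        else:
            result = a
        print(result)
        return result

object_multiply = multiply()
object_multiply.mult_num(20)          # prints 20
object_multiply.mult_num(20, 30)      # prints 600
object_multiply.mult_num(20, 30, 40)  # prints 24000
```

With this class in place, the three calls shown in Example #1 run as described, each producing the product of however many numbers were passed.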
Example #2
We shall modify the above program code slightly in the sense that we take the inputs from the user.
print("Single variable operation")
object_multiply.mult_num(float(input("Enter the number: ")))
# Two variable method execution
print("Two variable operation")
object_multiply.mult_num(float(input("Enter the first number: ")), float(input("Enter the second number: ")))
# Three variable method execution
print("Three variable operation")
object_multiply.mult_num(float(input("Enter the first number: ")), float(input("Enter the second number: ")), float(input("Enter the third number: ")))
Output:
We can see that the input parameters are no longer hard-coded. They can be passed dynamically. The default data type of the input values is a string, so we need to convert each one into a number first. As a result of this, we used the float function for type conversion. Let's see how the program code works when executed.
Initially, the single variable operation gets performed. We passed 12.5 as the input and got the same number as output. Then we are moved to “Two variable operations” as shown in the below screenshot.
In “Two variable operations”, we specified the first number as 13.5 and the second number as 19.7 as can be seen in the below screenshot.
Just go through the screenshot below, and see that this time we have got the product of two decimal numbers which we have passed as input parameters. The product is 265.95. As soon as the two variable operation gets performed, we are moved to “Three variable operations” as shown in the below screenshot.
Finally, we can see that the program code returns the product of three variables as can be seen in the bottom section of the output.
Through the above programs, we saw how the concept of method overloading works in Python. Basically, we create an object of the class and call the method defined in the class through that object. The object then regulates the use of the method, based on how many parameters the situation requires.
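As a side note, the same variable-parameter behavior is often achieved in Python with a variable-length argument list (*args). This is not the technique used in this article, but it is a common alternative; a sketch (the class and method names here are illustrative):

```python
class Multiply:
    def mult_num(self, *args):
        # Accepts any number of arguments: one, two, three, or more
        if not args:
            print("No numbers were passed")
            return None
        product = 1
        for number in args:
            product *= number
        print(product)
        return product

object_multiply = Multiply()
object_multiply.mult_num(20)          # prints 20
object_multiply.mult_num(20, 30)      # prints 600
object_multiply.mult_num(20, 30, 40)  # prints 24000
```

The if-elif-else chain disappears entirely, at the cost of losing named parameters.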
Through the above two examples, we worked on numeric data. Now, we go through a program that demonstrates the concept of method overloading through string variable example.
Example #3
The program code is as written below. Go through the code and see how each of its components work.
Code:
class Name:
    def name_declare(self, name1=None, name2=None):
        if name1 is not None and name2 is not None:
            print("Hello, I am " + name1 + " " + name2 + ".")
        elif name1 is not None:
            print("Hello, I am " + name1 + ".")
        else:
            print("Hi, How are you doing?")

object_name = Name()

# method execution by passing both name and surname.
object_name.name_declare("John", "Maurice")

# method execution by passing just name.
object_name.name_declare("Matthews")

# method execution by not passing any parameter.
object_name.name_declare()
When the above program code in Python is executed, we get the following output:

Hello, I am John Maurice.
Hello, I am Matthews.
Hi, How are you doing?
Conclusion
Method overloading is a crucial methodology offered by Python. It is quite useful in complex situations in which condition-based use and execution of certain parameters are needed. The methodology must be used appropriately, so as to avoid the wrong use of any parameter.
Recommended Articles
This is a guide to Method Overloading in Python. Here we discuss the introduction, how method overloading works in Python? and examples. You may also have a look at the following articles to learn more – | https://www.educba.com/method-overloading-in-python/ | CC-MAIN-2020-24 | refinedweb | 942 | 58.08 |
I was looking for some more ideas to use to test wiringPi v2 and Amy Mather's presentation at the Manchester Raspberry Jamboree gave me a great idea – implement John Conway's Game of Life on the Pi using some GPIO expander chips and one of those 8×8 LED matrix displays. Amy's solution involved an Arduino to drive the LED matrix, but I've used a board with 2 x MCP23017 GPIO expanders on it.
So this is my setup:
Hopefully Life enthusiasts will recognise the pattern in green dots on that display as a glider – and glide it does as you’ll see in the video below!
Hardware
The setup here has one of John Jay's IO expansion boards, and it was this that I was looking to test. I've already looked at one of his boards some time back but needed an opportunity to test this one – so this board has 2 x MCP23017 I2C GPIO expander chips on it, giving a grand total of 32 bits of GPIO. wiringPi sees this as just another 32 "pins" – no need to poke config registers, etc. with version 2…
The 8×8 display is bi-colour – Green and Red, so needs 24 bits of IO to make it work properly. As usual, I’m optimising the hardware by slightly increasing the complexity of the software, so I only have 8 resistors here on the commons to the bi-colour LEDs – that means I can only effectively light either the Red or the Green LED at any time. In-Theory I can further optimise the software by lighting up to 8 LEDs at a time, but I don’t.
One issue I have found here is the speed (or lack of it!) of the I2C bus… At the default of 100Kb/sec it starts to show up. I did manage to get the clock up to 750Kb/sec though, but faster than that and the I2C bus was unreliable – on the ‘scope it wasn’t rising back up to 3.3v. This may be a simple issue to do with the board though (the MCP23017’s are good to 1.7Mb/sec) so I2C bus speed may well be a factor when you’re looking to use it for your next project…
Here’s a short video of it in action:
You may notice some red leds – I changed the code slightly to show cells that had died in red (for one generation). Adds to the excitement (and I wanted to display the red LEDs too)
The surface is a torus, so it wraps round without any issues and I’m limiting the update rate to 10 a second here.
If you look at the code below you’ll see that the code to output bits on the IO expanders is nothing more than digitalWrite() operations. wiringPi v2 takes care of all the interpretation of the MCP23017’s registers, bits, controls, etc. leaving you to get on with the job of interfacing your hardware!
/*
 * life.c:
 ***********************************************************************
 */

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <stdlib.h>
#include <stdint.h>

#include <wiringPi.h>
#include <mcp23017.h>

#define ROW_OFFSET 108
#define COL_RED    116
#define COL_GREEN  124

unsigned char matrix [8][8] ;

PI_THREAD (matrixUpdater)
{
  int row, col ;
  unsigned char data ;

  piHiPri (50) ;

  for (;;)
  {
    for (row = 0 ; row < 8 ; ++row)
    {
      digitalWrite (ROW_OFFSET + row, 1) ;
      for (col = 0 ; col < 8 ; ++col)
      {
        data = matrix [col][row] ;
        /**/ if (data == 0)
          continue ;
        else if (data == 1)           // Green
        {
          digitalWrite (COL_GREEN + col, 0) ;
          delayMicroseconds (500) ;
          digitalWrite (COL_GREEN + col, 1) ;
        }
        else                          // Red
        {
          digitalWrite (COL_RED + col, 0) ;
          delayMicroseconds (500) ;
          digitalWrite (COL_RED + col, 1) ;
        }
      }
      digitalWrite (ROW_OFFSET + row, 0) ;
    }
  }
  return NULL ;
}

void setupMatrix (void)
{
  int row, col ;

  // We need wiringPi setup in some way to make sure that
  // delay() works, but we don't need to be root to
  // use the I2C, so ...

  wiringPiSetupSys () ;

  // Add in the 2 x 23017 GPIO expander from base pin 100

  mcp23017Setup (100, 0x20) ;
  mcp23017Setup (116, 0x21) ;

  // Set the pins up as we need it.
  // The first chip has the 2nd port addressing the rows

  for (row = 8 ; row < 16 ; ++row)
  {
    pinMode (100 + row, OUTPUT) ;
    digitalWrite (100 + row, 0) ;
  }

  // and the 2nd chip has the first port connected to the Greens
  // then Reds

  for (col = 16 ; col < 32 ; ++col)
  {
    pinMode (100 + col, OUTPUT) ;
    digitalWrite (100 + col, 1) ;
  }
}

/*
 * torus:
 *	Do the coordinate wrapping for a torus
 *********************************************************************************
 */

void torus (int *x, int *y)
{
  if (*x < 0) *x = *x + 8 ;
  if (*x > 7) *x = *x - 8 ;
  if (*y < 0) *y = *y + 8 ;
  if (*y > 7) *y = *y - 8 ;
}

/*
 * neighbours:
 *	Count our neighbours - this algorithm assumes the world is a torus
 *********************************************************************************
 */

int neighbours (int x, int y)
{
  int sx, sy ;
  int x1, y1 ;
  int count = 0 ;

  for (sx = x - 1 ; sx < x + 2 ; ++sx)
  {
    for (sy = y - 1 ; sy < y + 2 ; ++sy)
    {
      if ((sx == x) && (sy == y))     // don't count myself!
        continue ;

      x1 = sx ;
      y1 = sy ;
      torus (&x1, &y1) ;
      if (matrix [x1][y1] == 1)
        ++count ;
    }
  }
  return count ;
}

/*
 * updateLife:
 *	Take our matrix and create the next generation
 *********************************************************************************
 */

void updateLife (void)
{
  int n ;
  int x, y ;
  char newLife [8][8] ;

  for (x = 0 ; x < 8 ; ++x)
  {
    for (y = 0 ; y < 8 ; ++y)
    {
      n = neighbours (x, y) ;

      /**/ if ((n == 0) || (n == 1))  // Die due to isolation
      {
        if (matrix [x][y] == 1)       // There was life
          newLife [x][y] = 2 ;
        else
          newLife [x][y] = 0 ;
      }
      else if (n == 2)                // 2 neighbours - stable
        newLife [x][y] = matrix [x][y] ;
      else if (n == 3)                // 3 neighbours - new life (or same old life)
        newLife [x][y] = 1 ;
      else                            // 4 or more - die due to overcrowding
        newLife [x][y] = 0 ;
    }
  }

  // Copy new Life to the matrix

  memcpy (matrix, newLife, sizeof (newLife)) ;
}

/*
 *********************************************************************************
 * The works
 *********************************************************************************
 */

#undef Test
#define Glider
#undef Rpent
#undef Toad

#ifdef Glider
char initial [64] =
    "  *     "
    "   *    "
    " ***    "
    "        "
    "        "
    "        "
    "        "
    "        " ;
#endif

#ifdef Test
char initial [64] =
    "**      "
    "        "
    "        "
    "  **    "
    "  **    "
    "        "
    "        "
    "        " ;
#endif

#ifdef Rpent
char initial [64] =
    "        "
    "  **    "
    " **     "
    "  *     "
    "        "
    "        "
    "        "
    "        " ;
#endif

#ifdef Toad
char initial [64] =
    "        "
    " ***    "
    "***     "
    "        "
    "     ***"
    "    *** "
    "        "
    "        " ;
#endif

int main (int argc, char *argv [])
{
  int x, y ;

  setupMatrix () ;
  piThreadCreate (matrixUpdater) ;

  // Copy our initial setup to the 'matrix'

  for (x = 0 ; x < 8 ; ++x)
    for (y = 0 ; y < 8 ; ++y)
      if (initial [x + 8 * y] == '*')
        matrix [x][y] = 1 ;
      else
        matrix [x][y] = 0 ;

  delay (2000) ;

  for (;;)
  {
    updateLife () ;
    delay (100) ;
  }

  return 0 ;
}
When learning a new technology such as Ext JS, it's complicated to find good learning material.
This book will walk you through the very beginning, explaining to you why there are so many files when you download the library for the first time, and showing you the meaning of all the files and folders. We will learn when and how to use the library in every stage of the process of creating our first application and how we can make the components work together. We will also learn about architecture and how to use the Model-View-Controller (MVC) pattern in order to write maintainable and scalable code.
We will define layers to delegate specific responsibilities to each of them in order to have reusable code. Finally, we will learn how to prepare our code to deploy our application in a production environment; we will compress and obfuscate our code so it is delivered faster, and much more awesome stuff.
The company behind the Ext JS library is Sencha Inc.; they are working on great products that are based on web standards.
In this chapter, we will cover the basic concepts of this new framework. We'll learn how to import the library, the available tools to write our code, and we'll define the application that we'll build through the chapters of this book:
Should I use Ext for my next project?
Getting started with Ext JS
Our first program
Editors
Building an application

Ext JS is best suited for enterprise or intranet applications; it's a great tool to develop an entire CRM or ERP system. Ext JS 4 came out with a great tool to create themes and templates in a very simple way. The framework used for creating the themes is Compass. We can learn more about Compass on its own website.
The new class system allows us to define classes incredibly easily. We can develop our application using the object-oriented programming paradigm and take advantage of the single and multiple inheritance. This is a great advantage because we can implement any of the available patterns such as the MVC, Observable, or any other. This will improve our code.
Another thing to keep in mind is the growing community around the library; there are lots of resources created by the community. There is also Sencha Command, a tool that we can run on our terminal to automatically analyze all the dependencies of our code and create packages.
Documentation is very important, and Ext JS has great documentation: very descriptive, with a lot of examples and code that we can see in action right on the documentation pages. We can also read the comments from the community, or even help by commenting and extending the API content.
We should know that Ext JS has a dual license option for us. If we want to develop an open source project, we need to use the GPLv3 license for our own project; this way we don't need to pay a license for Ext. But, if we want to develop a commercial project and we don't want to share our code with the world, we have to buy a license for Ext JS on Sencha's website.
The content of each folder will be explained shortly.
We can also use the available Content Delivery Network (CDN) so we don't need to store the library in our own computer or server:
The CSS file:
The JavaScript file:
Before we start writing code we need to learn and understand a few concepts first. Ext JS is divided into three layers.
The Ext Core layer contains the classes that manage the Document Object Model (DOM), setting and firing events, support for Ajax requests, and classes to search the DOM using CSS selectors.
Finally the Ext JS 4 layer contains all the components, widgets, and many more features that we're going to be learning in this book.
This is a natural question when you look at the downloaded files and folders for the first time; every file and folder is there for a purpose, and now you're going to learn what each one is for:
The build folder contains the descriptor files to create a custom version of the Ext JS library. In here, we can find the JSB3 files that describe the files and packages to build the library from the source code. These JSB3 files will be used by the JavaScript Builder utility that we will learn to use later in this book.

The builds folder contains the minified versions of the library; we find the foundation, the core, and the Ext JS sandboxed version of the library. The sandboxed version allows us to run Ext 4 and any older version of the Ext library on the same page.

The docs folder contains documentation of the API. Just open the index.html file and you're going to see the packages and classes with all the configurations, properties, methods, and events available, guides and tutorials, links to watch videos online, and examples.

The examples folder contains a lot of examples of the components, layouts, and small applications that are built to show what we can do with the library. Open the index.html file and explore the samples and demos by yourself. It's important to say that some of them need to run on a web server, especially those that use Ajax.

The locale folder has the translations for 45 languages. By default the components are displayed in English, but you can translate them into any other language.

The jsbuilder folder contains the tool to build and compress our source code; the tool is written in Java and uses the YUI Compressor to improve file minification. The minification process allows us to create packages with all the classes and files that are needed in our application; this is an important step before deploying our application to production.

The src folder contains all the classes of the framework. Each class is in its own file so we can read it easily, and every folder corresponds to the namespace assigned to the class. For example, the class Ext.grid.Panel is in a file called Panel.js that is in a folder called grid (src/grid/Panel.js).

The resources folder is where the styles and images are located; we can also find the Sass files to create our custom theme in here. Sass is an extension of CSS3 that improves the language; we can use variables, mixins, conditionals, expressions, and more with Sass.
The ext-all.js file is the complete library with all the components, utilities, and classes. This file is minified so we can use it in a production environment.

The ext-all-debug.js file is the same as the ext-all.js file, but it is not minified, so we can use this file to debug our application.

The ext-all-dev.js file is similar to the ext-all-debug.js file, but it contains additional code to show more specific errors and warnings at development time; we should use this file only in development environments.

The ext-debug.js and ext-dev.js files follow the same concept as the ext-all files. The ext-debug.js file is an exact version of the ext.js file but is not minified. The ext-dev.js file contains extra code to log more specific errors in a development environment.
Now that we have a basic understanding of the downloaded files and folders we can go to the next step and get our hands on some code.
We need to setup our workspace to write all the examples of this book. Let's create a folder named
learning-ext-4. For now we don't need a web server to host our examples, but in the following chapters we are going to use Ajax; therefore, it's a good idea to use our favorite web server to host our code from these first examples.
In our new folder we are going to create folders that contain the examples for each chapter in this book. At this point we have a folder called
01-basics
that corresponds to this chapter and another called extjs-4.1.1, which contains the Ext JS framework. Both folders are located at the same level.
Inside the
01-basics folder we're going to create a file called
installation.html
, where we need to import the Ext library and create a JavaScript file called
app.js that will contain our JavaScript code:
Let's open the
installation.html file in our favorite editor and type the following code:
<!DOCTYPE html>
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>First program</title>

    <!-- Importing the stylesheet for the widgets -->
    <link rel="stylesheet" type="text/css" href="../extjs-4.1.1/resources/css/ext-all.css">

    <!-- Importing the Ext JS library -->
    <script type="text/javascript" src="../extjs-4.1.1/ext-all-dev.js"></script>

    <!-- Importing our application -->
    <script type="text/javascript" src="app.js"></script>
</head>
<body>
</body>
</html>
Tip
Downloading the example code
You can download the example code files for all Packt books you have purchased from your account.

As you can see in the previous code, the first step is to import the stylesheet from extjs-4.1.1/resources/css/ext-all.css; the second step is to import the whole library from
extjs-4.1.1/ext-all-dev.js. Now we're ready to write our code in the
app.js file.
Before we can start creating widgets we need to wait until the DOM is ready to be used. Ext JS provides a function called
Ext.onReady
, which executes a callback automatically when all nodes in the tree can be accessed. Let's write the following code in our
app.js file:
Ext.onReady(function(){
    alert("We're ready to go!");
});
Tip
Feel free to use your favorite browser to work through the examples in this book. I recommend you to use Google Chrome because it has more advanced developer tools and it's a fast browser. If you are a Firefox fan, you can download the Firebug plugin, it's a powerful tool that we can use for debugging on Firefox.
If for some reason we can't see the alert message in our browser, it's because we haven't defined the correct path to the
ext-all-dev.js file. If we look at the JavaScript console, we'll probably see the following error:
Uncaught ReferenceError: Ext is not defined
That means that the
ext-all-dev.js file is not imported correctly. We need to make sure everything is correct with the path and refresh the browser again.
Now that we know how to execute code when the DOM is ready, let's send an alert message from the Ext library. Using the
Ext.Msg object we can create different types of messages such as an alert, confirmation, prompt, progress bar, or even a custom message:
Ext.onReady(function(){
    //alert("We're ready to go!");
    Ext.Msg.alert("Alert","We're ready to go!");
});
Let's now ask the user a question with a confirmation dialog box:

Ext.onReady(function(){
    //alert("We're ready to go!");
    Ext.Msg.alert("Alert","We're ready to go!");
    Ext.Msg.confirm("Confirm","Do you like Ext JS?");
});
We use the
confirm method to request two possible answers from the user. The first parameter is the title of the dialog box and the second parameter is the question or message we want to show to the user:
After we choose an answer, we shall not see the confirmation dialog box anymore.
So far we have shown a confirmation dialog box requesting two possible answers to the user, but how can we know the user's response in order to do something according to the answer? There's a third parameter in the confirmation dialog box that is a callback function that will be executed when the user clicks on one of the two answers:
Ext.onReady(function(){
    //alert("We're ready to go!");
    Ext.Msg.alert("Alert","We're ready to go!");
    Ext.Msg.confirm("Confirm","Do you like Ext JS?", function(btn){
        if(btn === "yes"){
            Ext.Msg.alert("Great!","This is great!");
        }else{
            Ext.Msg.alert("Really?","That's too bad.");
        }
    });
});
The callback function is executed after the user clicks on the Yes or No button or closes the confirmation dialog box. The function receives as a parameter the value of the clicked button which is Yes or No; we can do whatever we want inside of the callback function. In this case we're sending a message depending on the given answer. Let's refresh our browser and test our small program to watch our changes. Confirmations are usually asked when a user wants to delete something, or maybe when he/she wants to trigger a long process, basically anything that has only two options.
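Stripped of the widget itself, the pattern here is just a callback that receives the id of the clicked button. The following framework-free sketch (the confirmStub function is an invented stand-in for illustration, not part of Ext JS) shows the same branching logic in plain JavaScript:

```javascript
// Stand-in for Ext.Msg.confirm so the pattern can run anywhere:
// a real dialog would wait for a click; here we simulate a "yes".
function confirmStub(question, callback) {
    var simulatedAnswer = "yes";   // the dialog passes "yes" or "no"
    callback(simulatedAnswer);
}

var result;
confirmStub("Do you like Ext JS?", function (btn) {
    if (btn === "yes") {
        result = "This is great!";
    } else {
        result = "That's too bad.";
    }
});
// result is now "This is great!"
```

The real dialog works the same way; the only difference is that the answer comes from the user instead of being hard-coded.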
Before we move on it's important to use the right tools in order to be more productive when building applications. There are many editors we can use to write our code. Let's review some of them.
The
Sublime Text 2 editor is a very light and fast editor. We can add many plugins to have a powerful development environment. The team behind this software is actively working on new improvements and new features; we have updates almost every day (if we compile the editor from the source code at
github). The package manager for plugins works greatly to install new software from third-party developers.
We can use an evaluation version that never expires, but it's really worth buying the license for this great editor.
If we decide to work with this editor, we should use the JavaScript Lint plugin to validate our code every time we save our file. Code completion is always welcome; we have a plugin for that too. And, of course, a snippet package for writing common Ext JS 4 code is available as well:
The
Eclipse editor is one of the most used editors out there. If you add the web tools plugin into the platform, you can get a JavaScript syntax validation, an Ext JS class
autocomplete, HTML and CSS validation. The downside of these tools is that they require a lot of resources from your computer, but if you have enough RAM, this is a good option for writing your code:
The previous screenshot shows the
autocomplete class in action. As you can see when you type
Ext you can select a class from the list. If you keep typing, it filters the classes that match your text.
You need to use the Spket plugin for adding the autocomplete functionality for Ext JS or any other JavaScript library. We can find the required steps to set up the Spket plugin online.
The Aptana editor is an IDE from Appcelerator. It's based on Eclipse but optimized for web applications. It's an open source project and free of charge.
Among other things, Aptana contains an autocomplete functionality for JavaScript and Ext JS, a JavaScript validator, a CSS and HTML validator, a JavaScript debugger, bundles, and so on:
Aptana is a great tool when working with Python, Ruby, or PHP as the backend of our projects. It contains tools to work with those out-of-the-box languages and also contains tools to deploy your application in the cloud using Heroku or Engine Yard.
The Textmate editor is a light and fast editor for Mac OS. It's not free, but it's worth what you pay for. Textmate doesn't have an autocomplete functionality like Aptana or Eclipse, but it contains bundles to automate some repetitive tasks such as creating classes and documenting methods and properties. I suggest you download the available bundles for Ext JS or create a custom bundle to automate these tasks:
Sencha Architect is a tool that allows us to define components with a few clicks. We can create an Ext JS or Sencha Touch project. We can get a free trial from the official website of Sencha; we can also buy the license there.
To start our application we need to create a project for Ext 4. The Sencha Architect desktop application will show an empty canvas in the center, the available components and classes on the left-hand side, and a project inspector and a configuration panel on the right-hand side:
Now let's create a simple application with this tool. First we need to drag a viewport component from the left-hand side panel into the canvas; the width and height are set automatically. The viewport always takes all the available space on our browser.
The next step is to add a Tab Panel section inside the viewport. To do this let's drag the component to the viewport. We can change the title of the tabs by clicking in the text and typing the new title; in this case we're going to set General, Groups, and Contacts for each tab. At this point we haven't set the height of the Tab Panel section; we can set the height in the Property panel at the right-hand side or by dragging the border of the panel. Another option is to set the layout property of the viewport to fit, this will expand the Tab Panel section to fit the viewport. Just select the viewport and in the Property panel, look for the layout property and select fit from the combobox:
Let's add a form to the General tab by dragging the form panel component to the tabs container. We will see an empty form with the title MyForm. Let's look for the title property in the Property panel at the right-hand side and delete the title. We can also set the border property to zero in order to get rid of the border.
So far we have an empty form in the General tab, let's add some text fields to the form by dragging them to the empty form. For this example let's set the label of these fields to Name and Last name. Now let's add a field date and a text area for the Birthdate and Comments field:
As we can see, it's pretty simple and fast to create prototypes or interfaces with this tool. Now that we have our application let's see it in action; we just need to save our project and then click on the Preview button on the top bar. A prompt dialog box will appear asking us for the URL prefix where our project is located; here we can use the address of our web server, if we are using one. A new window of our default browser will open with a working example of the components that we have designed; we can play with it and see how it is working:
If we want to see the code that is generated, we need to go to the folder where we save our project. In there we will see all the files and classes that have been created for this small example.
The Sencha Architect desktop application is a great tool to build our interfaces. At the time of writing this book the application allows us to modify the code directly in the Architect, but this is not as easy as using a text editor or IDE.
Throughout the chapters of this book we're not going to use the Sencha Architect desktop application. I believe it's a good idea to understand how to create the components by code, and once we know the basics we can use the Sencha Architect desktop application even better. If you are interested in using this tool, you should read the documentation that is available online on Sencha's official website.
In this book we're going to be building a small invoice application. This will help us learn the most commonly used components in the library and the way they interact with each other. Most of the code and examples presented in this book are used to create the UI for the final application.
There are two approaches when we develop an application with Ext JS. We can use only a few widgets and render them within the HTML page in a specific
div element along with other things and load as many pages as the system needs just like any other traditional web application. This is the first approach.
The second approach is to create a single web page application and only load data using Ajax and REST to transfer our data using JSON or XML. There's no need to create multiple views or pages in our project. We only need one and then we will dynamically create the required Ext JS components.
In this book we will go for the second option. We're going to develop a single web page application, loading our data using JSON. We're going to compress and minify our source code using the Sencha tools in order to deploy our application in a production environment.
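As a rough sketch of what that JSON data could look like, a response listing clients might be shaped like the following (the success/data envelope is just one common convention and an assumption here, not something the requirements dictate):

```json
{
    "success": true,
    "data": [
        { "id": 1, "name": "Acme Corp", "contact": "John Doe",
          "address": "123 Main St", "phone": "555-0100" },
        { "id": 2, "name": "Globex", "contact": "Jane Roe",
          "address": "742 Evergreen Terrace", "phone": "555-0199" }
    ]
}
```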
The downside of this approach is that we lost the browser history. When the user clicks on the back button of the browser he/she will be redirected to the last visited page. Gmail, Twitter, and some other sites that use this approach usually append a hash tag to the URL in order to simulate and keep track of the user history. Ext JS comes with a solution for these issues and we can take advantage of that by implementing the required utilities.
We're going to build an application to handle clients and categories of invoices. The client's module should display a list of clients with the basic information about them. The application should allow the user to see and edit the details of the selected client in the list.
The categories module should allow the user to manage the invoices categories in a tree. The user should create children nodes from any existing category. If the user deleted a category that has invoices, then the category and its invoices should be deleted as well. We will be able to drag-and-drop invoices to any of the categories on the tree. We will allow the user to re-order the categories by dragging them with the mouse.
Before we start coding we should define the requirements about how we are going to do our application using wireframes, so we can have a better idea of the final product.
Tip
A wireframe is just a sketch of how we will create the layout of our modules. I'm using a free tool at mockflow.com, but we can even use a piece of paper and pen.
First we define the main menu using a toolbar at the top of the viewport. This menu will contain the modules that we need, and it will also display the name of the logged-in user on the right-hand side of the toolbar. The viewport is a component that takes all the available space in the browser; therefore we should have only one viewport in our application:
For the main content we're going to use tabs to display each module. When the user clicks on any option of the main menu a new tab will be opened. The user should be able to close the tab when he/she finishes his/her tasks.
The Clients module will contain, on the left-hand side, a grid with the current clients in the application, and on the right-hand side, the form to create or edit a client. The module should look like the following screenshot:
When the user clicks on one of the clients of the grid, the form will be filled with the information of the selected client to be edited or deleted.
The New button will clear the form so the user can start filling the form with the information of a new client.
The Save button will submit the form data to the server in order to be saved. The server will create a new record or update the existing client in the database based on the sent ID.
The Delete button will remove the client from the database. Before sending the request the application should ask for confirmation. After the server responds with a success message, the form should be cleared and a feedback message should be displayed informing that the client has been deleted successfully. If an error occurs while deleting, an error message should be shown.
The second module manages the categories; we need to show a tree in order to display the categories correctly. Each time the user expands a category, the application will make a request to the server asking for the children of the selected category:
The Add button will create a new node in the tree panel as a child of the selected category. If there's no selected category then the new category will become a child of the root node.
The Delete button will delete the category and all its children. For this action the application should ask for confirmation from the user.
The user will be able to drag-and-drop any of the invoices on the right-hand side panel to any of the categories on the left-hand side tree. We will also allow the user to re-order the categories on the tree by dragging them on the position they want.
We need to define our models and how they are related to each other. According to the given requirements we need only three models, each of them are described as follows, as well as their fields:
Client: This describes a client entity.
This model contains ID, name, contact, address, and phone fields.
Category: This is used to classify many invoices.
This model contains ID, owner, name, created at fields.
Invoice: This belongs to one category and describes an invoice entity.
This model contains ID, category ID, name, date fields.
The following diagram represents the relationships between the models:
Ext JS 4 came with a great data package. We can define each of the previous models in a class that represents each entity; for now we're just defining and preparing things to start developing our application.
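To make the idea concrete, here is a sketch of how the Client entity could be declared with the new class system. The application namespace MyApp is an assumption for illustration, and a tiny stand-in for Ext.define is included only so the snippet is self-contained outside the browser; in a real page, ext-all-dev.js provides the real one:

```javascript
// Stand-in for Ext.define so this sketch runs without the framework.
var Ext = (typeof Ext !== "undefined") ? Ext : {
    define: function (name, config) { return config; }
};

// The Client entity from the requirements, declared as a model class.
// Field names match the list given above: ID, name, contact, address, phone.
var clientModel = Ext.define('MyApp.model.Client', {
    extend: 'Ext.data.Model',
    fields: [
        { name: 'id',      type: 'int'    },
        { name: 'name',    type: 'string' },
        { name: 'contact', type: 'string' },
        { name: 'address', type: 'string' },
        { name: 'phone',   type: 'string' }
    ]
});
```

In the real application these field names would line up with the JSON the server returns for each client.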
We are going to use Ajax to load and send data. In order to start developing we're going to need a web server. I suggest you use Apache because it's easy to install using one of the all-in-one packages such as XAMPP or WAMP. If you are using Mac OS you already have Apache installed; just turn it on by going to the Settings and Sharing configurations.
It's important to note that programming the services that manage the requests on the server side, to save and delete the information in the database, are out of the scope of this book. We're going to focus our attention on Ext JS programming and defining the JSON format to transfer the data. You can use whatever server-side language you want such as PHP, Python, Ruby, and so on to deal with the database.
Once you have your web server up and running create a
learning-extjs-4 folder in the public content of your server. Inside that folder we will copy the Ext JS framework with all the files and folders that we have downloaded before. We are going to create a folder for each chapter of this book where we will create our examples. At the end we will have something like in the following screenshot:
In this chapter, we learned important concepts about the Ext JS library, such as the three layers in which the library is divided. We also learned the meaning of the files and folders that the Ext library has, importing the library, and the troubles we may have. We also have reviewed a few editors that will help us to be more productive in doing our job. Feel free to use your favorite editor.
Throughout the course of the book we're going to learn about the components and at the same time we're going to build an application using what we have defined in this chapter. In the next chapter, we're going to learn about how to create object-oriented JavaScript using the amazing class systems that come with the latest version of the Ext JS library; also we'll learn about DOM manipulation and events. | https://www.packtpub.com/product/learning-ext-js-4/9781849516846 | CC-MAIN-2020-50 | refinedweb | 4,687 | 69.92 |
04 November 2010 15:43 [Source: ICIS news]
LONDON (ICIS)--Polyethylene (PE) buyers in the UK are facing hefty price increases in November because of the weak pound.
The pound sterling was currently trading at £1.14 against the euro, while three months ago its value was at £1.21.
“I will count myself lucky if we can get away with a £50/tonne (€57/tonne) increase this month,” said one large buyer.
UK buyers were coy about giving actual price levels, but the difference between UK low density PE (LDPE) prices and those in mainland Europe were said to be as much as €90/tonne at the end of October.
The balance had begun to be redressed as November business got under way, but there was still a way to go, said some major producers.
Net LDPE prices were currently trading around €1,250-1,270/tonne FD (free delivered) NWE (northwest Europe) in mainland Europe, while the
“We will simply not deliver to the
Another said: “We have to do something. Converters on the continent are complaining of cheap imports from the
LDPE was the grade where the differentials were most pronounced, and where sellers were more likely to succeed in increasing prices. Availability was tight and demand was good.
Year-to-date figures for LDPE were very healthy, unlike its sister grade, linear low density PE (LLDPE), which was showing a decrease compared to 2009 figures. LDPE growth, however, was running at an astonishing 8% over 2009 in
LDPE producers were confident of a strong end to the year as few imported volumes would make their way into
($1 = €0.71)
I implemented capitalization after punctuation. BUT how do I implement it so that the user can go BACK and delete the first word or character because he wants to continue in lower-case?
private void KeyPress(object sender, KeyPressEventArgs e)
{
    if (EndOfSentence())
    {
        e.KeyChar = Char.ToUpper(e.KeyChar);
    }
}
// private bool EndOfSentence() ...
I need a batch file that looks at a source directory for any newly saved
files that have been updated with a Revision letter at the end of the
filename. Example: Original file: filename.xls, Newly saved file:
filenameA.xls I need the batch to copy the filenameA.xls and rename it to
another directory as filename.xls(without the rev letter). I'm not very
good with strings and batch files, but I
I have a long list of server host names and I need to pull the server
host names that contain a two letter abbreviated state followed by a three
letter abbreviated city.
For example:
server host names: ohdubgh01sp, nyobg38djek, 123ohdub123as, oh2kjd
This regular expression should pull the first 3 but
not the last.
What I have
I have a fasta file as shown below. I would like to convert the three
letter codes to one letter code. How can I do this with python or R?
>2ppo
ARGHISLEULEULYS
>3oot
METHISARGARGMET
desired output
>2ppo
RHLLK
>3oot
MHRRM
your suggestions would be
appreciated!!
What I have so far is this:
#include <iostream>
#include <algorithm>
using namespace std;

int main()
{
    string genePool[16] = {"aa", "ab", "ac", "ad", "ba", "bb", "bc",
                           "bd", "ca", "cb", "cc", "cd", "da", "db",
                           "dc", "dd"};
    string coco, code, deco, dede;
    int total = 0;
    for (int i = 0; i <
We have 2 databases that should have matching tables. I have an
(In-Production) report that compares these fields and displays them to the
user in an MS-Access form (continuous form style) for correction.
This is all well and good except it can be difficult to find the
differences. How can I format these fields to bold/italicize/color the
differences?
"The lazy dog
Here's a string that I may have:
(MyStringIsOneWholeWord *)
I have used the following JavaScript regular expression to get the text after the bracket if it starts with My:

/(^|\s|\()+My(\w+)/g

The problem with this is that it includes the first bracket in the result, as that is the letter/character that found the match.
I want to make a function for checking a valid username, but I'm confused about what regular expression I need for this check.

The username can have [a-zA-Z0-9-_.]

But the username must start with a letter, like this -> [a-zA-Z]

The username's end character may not be a special character, just a letter or number.

The username can have unlimited dot or da
This is my index.html.erb
<% for char in 'A'..'Z' %>
  <a href="/pacientes?char=<%= char %>"><%= char %></a>
<% end %>
And this is
my controller:
if params[:char].nil?
  @pacientes = Paciente.all
elsif
  @pacientes = Paciente.where("apellido1 = ?", @char = params[:char])
end
corba namespace in c++ vs java
- From: Tom Forsmo <nospam@xxxxxxxxxx>
- Date: Mon, 29 Oct 2007 14:41:57 +0100
Hi
I am having a problem with mapping a corba namespace between c++ and java.
I am currently implementing a server with a legacy corba interface originally written in c++. The server has several clients written in c++ and the new server I am writing is in java. The legacy idl interface does not contain any modules; all its interfaces are defined without a namespace, except for some data types, which are defined in a single module
e.g.
data.idl:
module MyData {
typedef sequence<string> strings;
...
};
methods.idl
interface MyMethods {
void methodA(in MyData::strings names);
void methodB();
};
And here the namespace problem begins, since in java a module is equivalent to a package, which dictates a directory structure. While in c++ a module is just a namepsace which does not dictate a directory structure at all. What is normal in java is of course to have such code in a separate package called ex: services.corba. The problem of course is that all generated classes in java must then contain a "package services.corba;" statement. Since the legacy idl does not contain that namespace, there will be a mismatch.
If I dont include the java required modules in the idl, the generated stubs will not belong to a specific package. That is of course possible, but then it will be stored in the root source dir instead. That would really just make the source code a mess, when all other code is nicely packaged.
Another question is how the namespace is interpreted by Corba during execution of a call. Normally such a call engine should check that namespaces in a call match the server's namespace for it to succeed. But does it do so in Corba, or are namespaces just an internal issue of each part, i.e. client internal or server internal?
Anybody got any ideas on how to fix this?
regards
tom
(Resend, since the mail from my other account seems to have been
dropped)
In the latest cvs2svn.py, vendor branches are implemented by, for every
change in the vendor branch (i.e. import), copying the modified file to
the trunk (unless of course there has been a 1.2 commit to trunk before
that import). Any previous revision of the file in trunk is deleted
first. Here's an example:
Node-path: trunk/winedefault.reg
Node-action: delete
Node-path: trunk/winedefault.reg
Node-action: add
Node-copyfrom-rev: 231
Node-copyfrom-path: /branches/winehq/winedefault.reg
as a result of this
revision 1.1.1.2
date: 2000/11/23 15:07:47; author: andrewl; state: Exp; lines: +7 -2
Import of Nov 23 winehq.
on the vendor branch, when the default branch is assumed to be 1.1.1.
But this delete-and-recopy scheme is undesirable for me. Can it be made
so that instead of deleting the existing trunk copy and copying anew,
the existing trunk copy is updated with the text of the new vendor
branch revision instead?
"Why?", you may ask. Well, to preserve trunk's history in the case where
someone has been abusing cvs admin -b and then need to teach cvs2svn
when and where that happened.
For example, for this winedefault.reg, "someone" first committed a
change to it on trunk.
revision 1.2
date: 2001/04/28 01:15:07; author: ovek; state: Exp; lines: +98 -3
Registry entry that it seems Black&White needs
That patch was also submitted to the upstream authors ("WineHQ") and
merged there. The next release from WineHQ incorporated the patch, and
then cvs imported into the vendor branch:
revision 1.1.1.6
date: 2001/05/11 08:36:53; author: ovek; state: Exp; lines: +3 -0
Import of Wine release 20010510.
Now that there was no difference between 1.2 and 1.1.1.6, someone had
the bright idea of "tidying up" by using
cvs admin -b1.1.1 winedefault.reg
which sets the default branch back to 1.1.1, the vendor branch.
That's the first situation that called for changing cvs2svn. It assumes
there are no trunk commits when the default branch is 1.1.1, so I first
broke that assumption by changing
# Ratchet up the highest vendor head revision, if necessary.
if self.default_branch:
to
# HACK: Don't trust default branch, only let the absence of 1.2 matter
if self.default_branch and 0:
(As far as I can tell, the else case is perfectly sufficient for every
case I could think of, so I didn't need the if case and disabled it.)
Fair enough. But the problems don't stop there. What happens on the next
import, now that the default branch is 1.1.1?
revision 1.1.1.7
date: 2001/08/29 18:12:14; author: ovek; state: Exp; lines: +81 -0
Import of Wine release 20010824.
You guessed it. No manual merge to trunk. The 1.2 revision stays there,
but for casual cvs users, it doesn't matter, since they check out
1.1.1.7 since the default branch is 1.1.1.
I can't ask you to make cvs2svn detect this. It's next to impossible,
particularly since there *is* a later 1.3 version and the default branch
is thus no longer 1.1.1. I've had to put the problematic revisions into
a list, and then make is_trunk_vendor_commit return 1 when it
happens, like so:
vendor_commits = {
    'winedefault.reg': ['1.1.1.7', '1.1.1.8', '1.1.1.9'],
}

def is_trunk_vendor_commit(default_branches_db, cvs_path, cvs_rev):
    vendor_commit = vendor_commits.has_key(cvs_path) and cvs_rev in vendor_commits[cvs_path]
    if vendor_commit: return 1
    if default_branches_db.has_key(cvs_path):
        ...
(and then a lot of crud in define_revision to alert me when I might have
a case like that on my hands, so I can investigate whether or not it is
and thus need to add that revision to the vendor_commits structure)
So far so good. But then it turns out that this aforementioned
delete-and-recopy scheme whenever is_trunk_vendor_commit returns 1 kills
the idea, by destroying the trunk file's history (in the example, its
revision 1.2 commit and associated log message).
Node-path: trunk/winedefault.reg
Node-action: add
Node-copyfrom-rev: 1151
Node-copyfrom-path: /branches/winehq/winedefault.reg
So, whether you've read all the way here, or didn't bother with the
"why", how about a cvs2svn option to not delete-and-recopy in trunk, but
rather just change the trunk file's existing revision, preserving its
history?
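Concretely, the proposed behavior would show up in the dump as a plain change of the existing trunk node, rather than the delete/add-with-copyfrom pair shown earlier. Something like this (my illustration, not actual cvs2svn output):

```
Node-path: trunk/winedefault.reg
Node-action: change
Text-content-length: ...
(text of the new vendor branch revision follows)
```

That keeps the node's ancestry, and thus the 1.2 log history, intact.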
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Fri Feb 20 05:59:07 2004
This is an archived mail posted to the Subversion Dev
mailing list. | http://svn.haxx.se/dev/archive-2004-02/0590.shtml | CC-MAIN-2014-52 | refinedweb | 812 | 66.74 |
Teaching Your Computer
As I have written in my last two articles (Machine Learning Everywhere and Preparing Data for Machine Learning), machine learning is influencing our lives in numerous ways. As a consumer, you've undoubtedly experienced machine learning, whether you know it or not—from recommendations for what products you should buy from various online stores, to the selection of postings that appear (and don't) on Facebook, to the maddening voice-recognition systems that airlines use, to the growing number of companies that offer to select clothing, food and wine for you based on your personal preferences. Machine learning is generally divided into two categories. In "supervised learning", the computer is trained to categorize data based on inputs that humans had previously categorized. In "unsupervised learning", you ask the computer to categorize data on your behalf.
In my last article, I started exploring a data set created by Scott Cole, a data scientist (and neuroscience PhD student) who measured burritos in a variety of California restaurants. I looked at the different categories of data that Cole and his fellow eater-researchers gathered and considered a few ways one could pare down the data set to something more manageable, as well as reasonable.
Here I describe how to take this smaller data set, consisting solely of the features that were deemed necessary, and use it to train the computer by creating a machine-learning model.
Machine-Learning Models
Let's say that the quality of a burrito is determined solely by its size. Thus, the larger the burrito, the better it is; the smaller the burrito, the worse it is. If you describe the size as a matrix X, and the resulting quality score as y, you can describe this mathematically as:
y = qX
where q is a factor describing the relationship between X and y.
Of course, you know that burrito quality has to do with more than just the size. Indeed, in Cole's research, size was removed from the list of features, in part because not every data point contained size information.
Moreover, this example model will need to take several factors—not just one—into consideration, and may have to combine them in a sophisticated way in order to predict the output value accurately. Indeed, there are numerous algorithms that can be used to create models; determining which one is appropriate, and then tuning it in the right way, is part of the game.
The goal here, then, will be to combine the burrito data and an algorithm to create a model for burrito tastiness. The next step will be to see if the model can predict the tastiness of a burrito based on its inputs.
But, how do you create such a model?
In theory, you could create it from scratch, reading the appropriate statistical literature and implementing it all in code. But because I'm using Python, and because Python's scikit-learn has been tuned and improved over several years, there are a variety of model types to choose from that others already have created.
Before starting with the model building, however, let's get the data into the necessary format. As I mentioned in my last article and alluded to above, Python's machine-learning package (scikit-learn) expects that when training a supervised-learning model, you'll need a set of sample inputs, traditionally placed in a two-dimensional matrix called X (yes, uppercase X), and a set of sample outputs, traditionally placed in a vector called y (lowercase). You can get there as follows, inside the Jupyter notebook:
%pylab inline
import pandas as pd                   # load pandas with an alias
from pandas import Series, DataFrame  # load useful Pandas classes
df = pd.read_csv('burrito.csv')       # read into a data frame
Once you have loaded the CSV file containing burrito data, you'll keep only those columns that contain the features of interest, as well as the output score:
burrito_data = df[range(11,24)]
You'll then remove the columns that are highly correlated to one another and/or for which a great deal of data is missing. In this case, it means removing all of the features having to do with burrito size:
burrito_data.drop(['Circum', 'Volume', 'Length'], axis=1, inplace=True)
Let's also drop any of the samples (that is, rows) in which one or more values is NaN ("not a number"), which will throw off the values:
burrito_data.dropna(inplace=True, axis=0)
Once you've done this, the data frame is ready to be used in a model. Separate out the X and y values:
y = burrito_data['overall']
X = burrito_data.drop(['overall'], axis=1)
The goal is now to create a model that describes, as best as possible,
the way the values in X lead to a value in y. In other
words, if you look at
X.iloc[0] (that is, the input values for the first
burrito sample) and at
y.iloc[0] (that is, the output value for the
first burrito sample), it should be possible to understand how
those inputs map to those outputs. Moreover, after training the
computer with the data, the computer should be able to predict the
overall score of a burrito, given those same inputs.
Creating a Model
Now that the data is in order, you can build a model. But which algorithm (sometimes known as a "classifier") should you use for the model? This is, in many ways, the big question in machine learning, and is often answerable only via a combination of experience and trial and error. The more machine-learning problems you work to solve, the more of a feel you'll get for the types of models you can try. However, there's always the chance that you'll be wrong, which is why it's often worth creating several different types of models, comparing them against one another for validity. I plan to talk more about validity testing in my next article; for now, it's important to understand how to build a model.
Different algorithms are meant for different kinds of machine-learning problems. In this case, the input data already has been ranked, meaning that you can use a supervised learning model. The output from the model is a numeric score that ranges from 0 to 5, which means that you'll have to use a numeric model, rather than a categorical one.
The difference is that a categorical model's outputs will (as the name implies) indicate into which of several categories, identified by integers, the input should be placed. For example, modern political parties hire data scientists who try to determine which way someone will vote based on input data. The result, namely a political party, is categorical.
In this case, however, you have numeric data. In this kind of model, you expect the output to vary along a numeric range. A pricing model, determining how much someone might be willing to pay for a particular item or how much to charge for an advertisement, will use this sort of model.
I should note that if you want, you can turn the numeric data into categorical data simply by rounding or truncating the floating-point y values, such that you get integer values. It is this sort of transformation that you'll likely need to consider—and try, and test—in a machine-learning project. And, it's this myriad of choices and options that can lead to a data-science project being involved, and to incorporate your experience and insights, as well as brute-force tests of a variety of possible models.
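For instance, the rounding/truncating transformation mentioned above is a one-liner (illustrative numbers here, not the real burrito scores):

```python
# Continuous 0-5 quality scores, like the burrito "overall" column.
y = [4.2, 3.8, 1.5, 4.9, 2.5]

y_truncated = [int(v) for v in y]   # drop the fractional part
y_rounded = [round(v) for v in y]   # note: Python rounds halves to even

print(y_truncated)  # [4, 3, 1, 4, 2]
print(y_rounded)    # [4, 4, 2, 5, 2]
```

Either version turns the regression target into integer class labels, at the cost of throwing away the distinction between, say, a 4.2 and a 4.4 burrito.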
Let's assume you're going to keep the data as it is. You cannot use a purely categorical model, but rather will need to use one that incorporates the statistical concept of "regression", in which you attempt to determine which of your input factors cause the output to correlate linearly with the outputs—that is, assume that the ideal is something like the "y = qX" that you saw above; given that this isn't the case, how much influence did meat quality have vs. uniformity vs. temperature? Each of those factors affected the overall quality in some way, but some of them had more influence than others.
One of the easiest to understand, and most popular, types of models uses the K Nearest Neighbors (KNN) algorithm. KNN basically says that you'll take a new piece of data and compare its features with those of existing, known, categorized data. The new data is then classified into the same category as its K closest neighbors, where K is a number that you must determine, often via trial and error.
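To make that concrete, here is a tiny from-scratch sketch of the regression flavor of KNN (purely illustrative; the article itself uses scikit-learn's implementation). The prediction for a new sample is just the mean output of its K nearest known samples:

```python
def knn_predict(known_x, known_y, new_x, k=2):
    # Indices of the known samples, nearest first (1-D feature for simplicity).
    by_distance = sorted(range(len(known_x)), key=lambda i: abs(known_x[i] - new_x))
    nearest = by_distance[:k]
    return sum(known_y[i] for i in nearest) / k

# Made-up data: one input feature (say, meat quality) and an overall score.
x = [1.0, 2.0, 4.0, 5.0]
y = [1.5, 2.0, 4.0, 4.8]

print(knn_predict(x, y, 4.5))  # (4.0 + 4.8) / 2 = 4.4
```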
However, KNN works only for categories; this example is dealing with a
regression problem, which can't use KNN. Except, Python's
scikit-learn happens to come with a version of KNN that is designed to
work with regression problems—the
KNeighborsRegressor classifier.
So, how do you use it? Here's the basic way in which all supervised learning happens in scikit-learn:
Import the Python class that implements the classifier.
Create a model—that is, an instance of the classifier.
Train the model using the "fit" method.
Feed data to the model and get a prediction.
Let's try this with the data. You already have an X and a y, which you
can plug in to the standard
sklearn pattern:
from sklearn.neighbors import KNeighborsRegressor  # import classifier
KNR = KNeighborsRegressor()                        # create a model
KNR.fit(X, y)                                      # train the model
Without the
dropna above (in which I removed any rows containing one or more
NaN values), you still would have "dirty" data, and
sklearn would be unable to proceed. Some classifiers can handle NaN
data, but as a general rule, you'll need to get rid of
NaN values—either to satisfy the classifier's rules, or to ensure that
your results are of high quality, or even (in some cases) valid.
With the trained model in place, you now can ask it: "If you have a burrito with really great ingredients, how highly will it rank?"
All you have to do is create a new, fake sample burrito with all high-quality ingredients:
great_ingredients = np.ones(X.iloc[0].count()) * 5
In the above line of code, I took the first sample from X (that is,
X.iloc[0]), and then counted how many items it contained. I then
multiplied the resulting NumPy array by 5, so that it contained all
5s. I now can ask the model to predict the overall quality of such a
burrito:
KNR.predict([great_ingredients])
I get back a result of:
array([ 4.86])
meaning that the burrito would indeed score high—not a 5, but high nonetheless. What if you create a burrito with absolutely awful ingredients? Let's find the predicted quality:
terrible_ingredients = np.zeros(X.iloc[0].count())
In the above line of code, I created a NumPy array containing zeros, the same length as the X's list of features. If you now ask the model to predict the score of this burrito, you get:
array([ 1.96])
The good news is that you have now trained the computer to predict the quality of a burrito from a set of rated ingredients. The other good news is that you can determine which ingredients are more influential and which are less influential.
At the same time, there is a problem: how do you know that KNN regression is the best model you could use? And when I say "best", I ask whether it's the most accurate at predicting burrito quality. For example, maybe a different classifier will have a higher spread or will describe the burritos more accurately.
It's also possible that the classifier is a good one, but that one of its parameters—parameters that you can use to "tune" the model—wasn't set correctly. And I suspect that you indeed could do better, since the best burrito actually sampled got a score of 5, and the worst burrito had a score of 1.5. This means that the model is not a bad start, but that it doesn't quite handle the entire range that one would have expected.
One possible solution to this problem is to adjust the parameters that
you hand the classifier when creating the model. In the case of any
KNN-related model, one of the first parameters you can try to tune is
n_neighbors. By default, it's set to 5, but what if you set it to
higher or to lower?
A bit of Python code can establish this for you:
for k in range(1, 10):
    print(k)
    KNR = KNeighborsRegressor(n_neighbors=k)
    KNR.fit(X, y)
    print("\tTerrible: {0}".format(KNR.predict([terrible_ingredients])))
    print("\tBest: {0}".format(KNR.predict([great_ingredients])))
After running the above code, it seems like the model that has the
highest high and the lowest low is the one in which
n_neighbors is
equal to 1. It's not quite what I would have expected, but that's why it's
important to try different models.
And yet, this way of checking to see which value of
n_neighbors is
the best is rather primitive and has lots of issues. In my next article, I plan to
look into checking the models, using more sophisticated
techniques than I used here.
Using Another Classifier
So far, I've described how you can create multiple models from a single classifier, but scikit-learn comes with numerous classifiers, and it's usually a good idea to try several.
So in this case, let's also try a simple regression model. Whereas KNN uses existing, known data points in order to decide what outputs to predict based on new inputs, regression uses good old statistical techniques. Thus, you can use it as follows:
from sklearn.linear_model import LinearRegression
LR = LinearRegression()
LR.fit(X, y)
print("\tTerrible: {0}".format(LR.predict([terrible_ingredients])))
print("\tBest: {0}".format(LR.predict([great_ingredients])))
Once again, I want to stress that just because you don't cover the entire spread of output values, from best to worst, you can't discount this model. And, a model that works with some data sets often will not work with other data sets.
But as you can see, scikit-learn makes it easy—almost trivially easy, in fact—to create and experiment with different models. You can, thus, try different classifiers, and types of classifiers, in order to create a model that describes your data.
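One bonus of the linear model is that it makes the earlier question (which ingredients are more influential?) directly answerable: the fitted coefficients are the influences. A NumPy-only sketch on synthetic data (my example, not the burrito data; scikit-learn's LinearRegression performs the same least-squares fit):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(200, 3))    # say: meat, uniformity, temperature
true_coefs = np.array([0.6, 0.3, 0.1])  # meat matters most, by construction
y = X @ true_coefs + rng.normal(0, 0.05, size=200)

# Ordinary least squares, with a column of ones for the intercept.
A = np.column_stack([X, np.ones(len(X))])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coefs[:3], 2))  # approximately [0.6, 0.3, 0.1]
```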
Now that you've created several models, the big question is which one is the best? Which one not only describes the data, but also does so well? Which one will give the most predictive power moving forward, as you encounter an ever-growing number of burritos? What ingredients should a burrito-maker stress in order to maximize eater satisfaction, while minimizing costs?
In order to answer these questions, you'll need to have a way of testing your models. In my next article, I'll look at how to test your models, using a variety of techniques to check the validity of a model and even compare numerous classifier types against one another.
Resources
A good e-mail list to follow is "KDNuggets". You also should consider the "Data Science Weekly" newsletter and "This Week in Data", describing the latest data sets available to the public.
I am a big fan of podcasts.
@mauricemeilleur You're seeing the representation of your shape as cubic beziers. The quads get converted to cubic because the underlying NSBezierPath doesn't support quadratic curves.
Posts made by justvanrossum
- RE: qCurve
- RE: Drawing glyphs from ufo file.
@rafalbuchner This should work:
def drawGlyph(g):
    bez = BezierPath()
    g.draw(bez)
    drawPath(bez)
- RE: Cellular automaton posted in Code snippets
- RE: How to run Drawbot on a server: Does it is possible? posted in General Discussion
- RE: listFontGlyphNames() returning Alphabetical order?
- RE: Making new libraries
Alternatively you can park your module in
/Library/Python/3.6/site-packages/ (if using Python 3.6). If that folder doesn't exist, you can create it manually. DrawBot will find it.
- A grid of animated spirals
def spiral(cx, cy, diameter, angle):
    with savedState():
        translate(cx, cy)
        scale(diameter / 1000)
        for i in range(100):
            rect(406, 0, 95, 95)
            rotate(angle)
            scale(0.99)

gridSize = 100
margin = 50
canvasSize = 500
numFrames = 40
for frame in range(numFrames):
    t = frame / numFrames
    newPage(canvasSize, canvasSize)
    frameDuration(1/20)
    fill(0)
    rect(0, 0, canvasSize, canvasSize)
    fill(1)
    translate(margin, margin)
    for i in range(5):
        for j in range(5):
            a = 2 * pi * (t + (i + j) / 8)
            angle = 31 + 2 * sin(a)
            spiral(i*gridSize, j*gridSize, 95, angle)
saveImage("~/Desktop/SpiralGrid.gif")
- RE: SSL Error (known SSL OSX issue?)
The link you reference indeed explains the problem. I tried the first solution but it doesn't help. I don't have that command file in my machine to try that.
- RE: Windows minimizing and re-expanding after completing script
@mauricemeilleur Hm, I've never seen this happen.
- RE: image Object should have imagePixelColor method
You can pass an ImageObject to
imagePixelColor():
im = ImageObject()
with im:
    fill(1, 0, 0)
    rect(100, 100, 200, 200)
print(imagePixelColor(im, (150, 140)))
- RE: SSL Error (known SSL OSX issue?)
@chuckloyola It fails for me in the same way, but also in Terminal. I have no idea why. It's more a general Python issue rather than one with DrawBot it seems.
- RE: can't fetch image from url (python3 version)
@chuckloyola Ok, good to know. Maybe it was a network glitch.
- RE: can't fetch image from url (python3 version)
@chuckloyola said in can't fetch image from url (python3 version):
hmm ... the latest DrawbotPy3 version on the website is 3.113
Oh I see, that's weird. I'll look into it.
- RE: can't fetch image from url (python3 version)
@chuckloyola And, just to make sure, you are still getting the same error on the Py3 version of DrawBot 3.114?
Regarding the gif error: I opened an issue about that:
- RE: can't fetch image from url (python3 version)
@chuckloyola This is actually a very different error, and I can reproduce it. (Something with placing an animated gif.)
What error do you get with the jpg link? The same as in your original report?
- RE: can't fetch image from url (python3 version)
@chuckloyola We're at 3.114 now, but I don't think it will make a difference. You mention Python 3. I somehow bet the same problem occurs with the Python 2 version of DrawBot.
- RE: Drawbot on something more powerful than desktops and laptops?
@mauricemeilleur Yes, to both questions. However, it requires you to have drawBot (and its dependencies) installed as an importable module. See the readme at
Once you have that set up, learn how to use command line arguments to pass info to your script:
python path/to/my/script.py 3
In the script:
import sys

startFrame = int(sys.argv[1])
print(startFrame)
- RE: Saving large videos
@mauricemeilleur Right, sorry, I didn't notice that earlier thread before. | https://forum.drawbot.com/user/justvanrossum/posts | CC-MAIN-2019-04 | refinedweb | 628 | 67.04 |
I'm still trying different things with Twining. I'd thought about writing some "front end" type experience for usage but it really crystallized as a need when I showed it to a person I know and there seemed to be a disconnect in how it could be used. For me it's natural to set my path environment variable, launch favorite text editor X and then run things from the command line or a script, but it's a nuisance if you're used to a one stop shop for being able to use some tool. And as much as I want language as the focal point in the word "tool" there is something practical in the notion of something you download and click a button to execute with.
Enter Twy, which I pieced together after looking at a few samples of a hosted DLR engine in a Windows Forms app. Now one need not figure out how to install or configure anything, or worry about creating and disposing of script files.
If you want to write something that hosts the DLR engine, take a look first at these samples on Voidspace. There are other samples online if you hunt and peck but be aware that things have changed between the various releases of IronPython. A few gotchas for me:
1. Redirecting standard output:
// where engine references the ScriptEngine type
// and ms references a Stream of some sort
engine.Runtime.IO.SetErrorOutput(ms, Encoding.UTF8);
engine.Runtime.IO.SetOutput(ms, Encoding.UTF8);
Many examples of this are deprecated for the IronPython 2.x beta
2. Referencing classes in mscorlib:
Be aware that doing the following:
import clr
clr.AddReference("System")
is not going to be enough to get types out of mscorlib. Although types will load from System, you'll need to get a reference to the assembly directly if you plan to use it in your hosted engine. I had a little trouble with the StringBuilder but easily resolved it with the following after a tip on the IronPython mailing list.
Assembly assem = Assembly.GetAssembly(Type.GetType("System.Text.StringBuilder"));
scope = engine.CreateScope();
engine.Runtime.LoadAssembly(assem);
3. The only novel thing I did that I didn't see a lot of was loading a module so that you could utilize it with your hosted engine. I added Twining.py to the project and set Visual Studio to copy it to the compile destination. I then have the following code which keeps the module available for later use:
string p = Path.Combine(Environment.CurrentDirectory, "Twining.py");
scope = engine.Runtime.ExecuteFile(p);
// later on:
ScriptSource source =
engine.CreateScriptSourceFromString(input,
SourceCodeKind.Statements);
object res = source.Execute(scope);
All in all not rocket science, it's amazing how much power one has at their fingertips in such a small application. I would love to see other modules, especially ones that define some interesting type of DSL, have utilities like this that let you play around without much effort.
Oh yeah, the project and source. Download it here, I'll clean up a bit more later. | http://metadeveloper.blogspot.com/2008_04_01_archive.html | CC-MAIN-2018-22 | refinedweb | 514 | 63.49 |
You can convert data from one type to another in OmniMark in a number of ways: using format items (%), the binary operator, and the base operator.
The following examples show how these facilities can be applied to a number of different data conversion problems.
You can convert a
string to an
integer value simply by using the
string value where an
integer is expected. OmniMark invokes a built-in conversion function to convert the
string value
to an
integer automatically. The
string used must contain only decimal digits.
process
   local string s initial { "6" }
   local integer i initial { 7 }
   set i to i + s
   output "i = " || "d" % i
OmniMark's floating point and BCD libraries provide conversion functions for converting from
strings
to
float and
BCD numbers respectively, so
strings are converted to
floats or
BCDs in the same way:
import "ombcd.xmd" unprefixed

process
   local string s initial { "12.75" }
   local bcd x initial { "0.25" }
   set x to x + s
   output "x = " || "d" % x
You can convert an
integer to a
string expression using the
d
format item, as illustrated in the examples above.
The
d format item has many format modifiers that allow you to specify how the number is displayed. For
instance, to display a number as two hexadecimal digits, you would use the sequence
16ru2fzd. This sequence
means:
16r- display using radix (or base) 16—hexadecimal,
u- display using uppercase letters for digits over 9,
2f- display with a width of 2 digits,
z- pad the display with leading zeros, and
d- the d format item.
Thus the following code will print
FD:
process
   local integer i initial { 253 }
   output "16ru2fzd" % i
You can convert a
BCD value to a
string using the
BCD template formatting language.
For instance, the following code outputs
$5,729.95:
import "ombcd.xmd" unprefixed

process
   local bcd total initial { 5729.95 }
   output "<$,NNZ.ZZ>" % total
To get the ASCII code (or EBCDIC code on machines that use it) for an individual character, you can use the
binary operator:
process
   local string s initial {"G"}
   output "The ASCII code for " || s || " is " || "d" % binary s || "."
To output the character that corresponds to an ASCII code, use the
b format item:
process
   local integer i initial { 71 }
   output "The character corresponding to ASCII code " || "d" % i || " is " || "b" % i || "."
You can convert non-base 10 numbers, represented as
strings, into
integers using the
base operator. For instance, this program converts the
string representation of a
hexadecimal value to an
integer:
process
   local string s initial { "7F" }
   local integer i
   set i to s base 16
   output "d" % i
In some cases, OmniMark cannot tell which data format is intended when you provide a value of a different
type. In this case, OmniMark cannot call the appropriate conversion function and you must specify which type you
intended using a cast. A common example of this can occur when using overloaded operators. For example, the
BCD and
float libraries both provide
overloaded versions of the
+ operator
to work with
BCD and
float values respectively, and also to work with combinations of
BCD
or
float values with OmniMark's built-in types.
In the following example, an
integer value is added to a
string value that expresses a
decimal fraction. The result is then assigned to a
BCD shelf item. Because overloaded
functions are selected based on the types of their arguments, and not on the type of their return values,
OmniMark sees this as the addition of an
integer with a
string. It then throws an exception
complaining that the
string
729.95 is not a
valid
integer.
import "ombcd.xmd" unprefixed

process
   local integer i initial { 2 }
   local string s initial { "729.95" }
   local bcd x
   set x to i + s
   output "<$NNZ.ZZ>" % x
To force OmniMark to select the
BCD version of the
+ operator we must force at least one
of the terms to be evaluated as a
BCD value by using a cast:
import "ombcd.xmd" unprefixed

process
   local integer i initial { 2 }
   local string s initial { "729.95" }
   local bcd x
   set x to i + bcd s
   output "<$NNZ.ZZ>" % x
If you create your own data types using
records, you may want to write conversion functions to convert
between those types and other types. In particular it is often useful to convert between user defined types and
strings. See conversion functions.
Here is a simple hexadecimal dump program that uses some of these conversion methods to print out side-by-side
ASCII and hexadecimal representations of a file. In the ASCII representation, unprintable characters are
represented by periods:
declare #main-input has binary-mode

process
   submit #main-input

find any{1 to 16} => chars
   local integer i
   repeat scan chars
   match [" " to "~"]+ => visible
      output visible
   match any
      output "."
   again
   output " " ||* 16 - length of chars
   repeat scan chars
   match any => char
      output " " || "16ru2fzd" % binary char
      increment i
      output " -" when i = 8
   again
   output "%n"
Traits for a class mapped with Wt::Dbo. More...
#include <Wt/Dbo/Dbo>
Traits for a class mapped with Wt::Dbo.
The traits class provides some of the mapping properties related to the primary key and optimistic concurrency locking using a version field.
See dbo_default_traits for default values.
The following example changes the surrogate id field name for a class
Foo from the default
"id" to
"foo_id":
Type of the primary key.
This indicates the type of the primary key, which needs to be
long long for a surrogate id, but can be any type supported by Wt::Dbo::field() (including composite types) for a natural primary key.
The following operations need to be supported for an id value:
std::ostream << id
id == id
id < id
Only the default
long long is supported for an auto-incrementing surrogate primary key. You need to change the default key type typically in conjuction with specifying a natural id, see Wt::Dbo::id().
The following example illustrates how to prepare a type to be usable as a composite id type:
Returns the sentinel value for a
null id.
When used as a foreign key, this value is used to represent a
null value.
Configures the surrogate primary key field.
Returns the field name which is the surrogate primary key, corresponding to the object's id.
You can disable this auto-incrementing surrogate id by returning
0 instead. In that case you will need to define a natural id for your class using Wt::Dbo::id().
Configures the optimistic concurrency version field.
Optimistic concurrency locking is used to detect concurrent updates by an object from multiple sessions. On each update, the version of a record is at the same time checked (to see if it matches the version of the record that was read), and incremented. A StaleObjectException is thrown if a record was modified by another session since it was read.
This method must return the database field name used for this version field.
You can disable optimistic locking using a version field all together for your class by returning
0 instead. | https://webtoolkit.eu/wt/wt3/doc/reference/html/structWt_1_1Dbo_1_1dbo__traits.html | CC-MAIN-2021-31 | refinedweb | 349 | 54.42 |
Microsoft Corporation
September 2003
Applies to:
Microsoft® ASP.NET
Microsoft Visual Basic® .NET
Summary: Learn about migrating from PHP to ASP.NET. Learn about the features, functionality and architecture of both systems. (24 printed pages)
Introduction
Architecture Comparison
Feature Comparison
Comparing Syntax and Common Tasks
Data Caching and Page Caching
Summary
Recommended Next Steps
Resources
This paper discusses the migration of PHP (PHP:Hypertext Preprocessor 4) to ASP.NET. It compares and contrasts the basic underlying syntax of PHP with Microsoft® Visual Basic® .NET, as well as the underlying functionality and architecture of the two systems.
While both PHP and ASP.NET allow you to build sophisticated and complex Web applications (like e-commerce sites, intranets, and corporate portals), PHP and ASP.NET have several major differences. Unlike PHP, ASP.NET is not a language or a parser but rather a set of technologies in the Microsoft .NET Framework for building Web applications and XML Web Services. Microsoft ASP.NET pages execute on the server like PHP and generate markup, such as HTML, WML or XML, which is sent to a desktop or to mobile applications. ASP.NET, though, is different in that it provides a robust, object-oriented, event-driven programming model for developing Web pages, while still maintaining the simplicity that PHP developers are accustomed to.
ASP.NET applications are based on a robust Object Oriented Programming (OOP) paradigm rather than a scripting paradigm. This allows for more rigorous OOP features, such as inheritance, encapsulation and reflection. While most basic and simple operations can easily be translated from PHP to ASP.NET, more complex applications will not be as simple to convert from PHP to ASP.NET and will require careful planning and consideration as well as a more OOP approach.
In this paper, we assume that the reader has experience with PHP as well as programming and software development in general. We begin this paper with a look at code with a short comparison of the underlying architectural differences and the OOP development model, followed by a feature comparison, and then a comparison of Syntax and Common tasks for developing Web applications with PHP and ASP.NET.
Note If you would like to skip the migration details, and simply test drive ASP.NET, feel free to jump to the Recommended Next Steps section.
As you will learn from the syntax and language comparison and the end of this paper, PHP and ASP.NET are relatively similar with analogous functionality and syntax. PHP, however, is very different from ASP.NET at a lower architectural level. PHP is based on a platform-independent processor/engine that parses PHP scripts and provides for database connections, Internet protocol compliance, and numerous other tasks common to most Web application platforms.
ASP.NET is a framework built upon a series of technologies such as the CLR and offers an extensive series of well-organized class libraries that provide for most every conceivable set of functionality that would be used in a Web application. It also allows for the easy and simple creation of components to extend the framework.
While PHP offers similar things, such as the PEAR libraries, PHP and ASP.NET are not truly analogous because the ASP.NET framework is built from the ground up on an OOP paradigm and OOP concepts; PHP is not. This difference is most apparent in the ways you access classes and objects in PHP and ASP.NET.
Both PHP and ASP.NET offer OOP paradigms to application development, but their support for various OOP concepts, such as encapsulation and polymorphism, differs. For example, PHP only supports partial encapsulation (such as support for declaring methods and fields in the class) and partial polymorphism (no overloading, no abstraction). PHP also lacks accessibility modifiers: there is no concept of private, public, or protected methods in classes, nor any support for overloading. While OOP purists may debate that ASP.NET and the various languages do not fully support every concept in the OOP paradigm, this is true of most languages considered OOP, such as C++ and Java.
This has both an upside and a downside. The downside is that for some Web developers there is a steeper learning curve for ASP.NET versus PHP, which offers the scripting paradigm that developers have traditionally employed for building Web sites. However, developers who have a background in OOP languages and/or Visual Basic will find ASP.NET intuitively familiar and easy to learn.
The upside to ASP.NET's support of OOP concepts is that ASP.NET applications for the most part result in better-designed code, with clear separation of content, logic, and data, and thus are generally easier to support over an application's life cycle. In addition, ASP.NET's native support for enterprise technologies such as Message Queuing, Transactions (see the .NET Framework's System.EnterpriseServices classes), SNMP, and Web Services makes it simple to develop highly scalable and robust applications.
You can find an introduction to the main areas of object-oriented programming (from a Visual Basic point of view) in Object-Oriented Programming in Visual Basic.
When a PHP page is requested, the HTML and inline PHP script is compiled to Zend Opcodes. Opcodes are low-level binary instructions that will be used to serve the PHP page. After compilation, the Zend Engine runs the opcodes (similar to the way Java's runtime engine runs byte code), and then HTML is generated and served to the client.
There are a number of commercial products that can be used to speed up the execution of a PHP page by optimizing these opcodes. Other ways to increase performance of PHP scripts include caching the opcode and caching the generated HTML.
When a request is made to IIS (Internet Information Services) or another Web server for an .aspx page (or any other extension supported by ASP.NET), the request is passed to ASP.NET for processing. If this is the first time the page has been requested, ASP.NET compiles the page to MSIL (Microsoft intermediate language). This MSIL code is then processed by the CLR (common language runtime) to machine code. Then the request is run using this compiled code. Subsequent requests are served from this same machine code assuming the page has not been modified.
It is important to note that the binary code that is generated by the CLR is already as optimized as possible; no add-on product is necessary to achieve maximum performance.
It is also important to note that everything in ASP.NET is compiled to machine code before being run. Even HTML text is converted to a string literal control and inserted in proper order into the control tree.
Table 1 presents a comparison of some of the prominent features in PHP and ASP.NET.
Table 1. Comparing the features of PHP and ASP.NET
The most popular tool is Visual Studio .NET, which has full support for all .NET languages, database tools for creating and testing SQL databases, Web design tools, integration with version control, advanced debugging, and numerous other features. For a full list, see the MSDN® Visual Studio Developer Center.
Other tools, including Borland C# Builder and Macromedia Dreamweaver MX, also support ASP.NET.
The following sections provide comparisons between PHP and .NET syntax as well as how to accomplish some of the more common programming tasks.
PHP allows you to insert comments in your code using C, C++ and Unix shell-style syntax, and anything within those comment indicators will not be executed.
In general, to comment out Visual Basic .NET code in ASP.NET you just need to use <%-- to open a comment block and --%> to close the block.
Code Sample 1 shows comments in each environment.
Code Sample 1. Server-side comments in PHP
/*
This is a block of text
That has been commented out
*/
Code Sample 1. Server-side comments in ASP.NET
<%--
This is a comment.
--%>
While PHP and Visual Basic .NET have similar language constructs, they are very different syntax for them. Since Visual Basic .NET is built upon an OOP model, variable declaration is much more rigorous than in PHP where a variable is declared simply by adding a dollar sign ($) before the variable name.
In Visual Basic .NET you declare a variable by specifying its name and characteristics. The declaration statement for variables is the Dim keyword. Its location and contents determine the variable's characteristics. Variables have levels such as local and module, data types, lifetimes and finally accessibility.
While this approach may seem more complex at first than variable assignment in PHP it actually makes a developer's life easier. ASP.NET focuses on helping developers build robust applications—and specifying data types makes tasks such as variable clean up, debugging, exception and error handling, and code maintenance much easier.
Code Sample 2 shows examples of declaring variables in each environment.
Code Sample 2. Variable declaration in PHP
$head_count
$foo
$X
$obj
Code Sample 2. Variable declaration in Visual Basic .NET
Dim head_count As Integer
Dim foo As String
Dim X As Date
Dim Obj As object
The AS clause in the declaration statement allows you to define the data type or object type of the variable you are declaring. You can specify any of the following types for a variable:
AS
An elementary data type, such as Boolean, Long, or Decimal
An object type or class, such as Label or TextBox
You can declare several variables of the same type in one statement without having to repeat the data type. In the following statements, the variables numStudents, numGTA and numProfessors are declared as type Integer:
Dim numStudents, numGTA, numProfessors As Integer
' All three are Integer variables.
For more information on data types, see Data Types. For more information on object-oriented programming, see Object-Oriented Programming in Visual Basic.
The lifetime of a variable is the period of time during which it is available for use. A local variable declared with a Dim statement exists only as long as its procedure is executing. When the procedure terminates, all its local variables disappear and their values are lost.
The concept of lifetime is extremely useful in that it allows developers to build applications without having to concern themselves with many issues that occur in large-scale applications, such as efficient memory management. By selecting the correct lifetime for a variable, you allow .NET to perform cleanup operations on variables that are no longer being used.
For more information on lifetime, see Lifetime.
A local variable is one that is declared within a procedure (a procedure is analogous to a function). A non-local variable is one that is declared outside a procedure, but within a class or structure.
In a class or structure, the category of a non-local variable depends on whether or not it is shared. If it is declared with the Shared keyword, it is a shared variable, and it exists in a single copy shared among all instances of the class or structure. Otherwise it is an instance variable, and a separate copy of it is created for each instance of the class or structure. A given copy of an instance variable is available only to the instance for which it was created.
The scope of a variable is the set of all code that can refer to it without qualifying its name. A variable's scope is determined by where the variable is declared. Code located in a given region can use the variables defined in that region without having to qualify their names. If you declare a local variable within a block, its scope is that block only: a local variable is active within its defining control block, which can be a procedure, an If statement, a loop statement, and so on.
For more information on scope, see Scope.
.NET supports the idea of accessibility to variables, which allows you, the developer to control what code can access specific variables. For example if you wanted to set some constants for a formula and make sure that your constant never gets changed by other code outside of its class you could declare that variable private like this:
Private myConstant As Integer
A variable's accessibility is determined by which keyword or keywords you use in the declaration statement: Dim, Public, Protected, Friend, Protected Friend, or Private. In general you will use only Public and Private in your development.
You can declare a module, structure, class, or instance variable with any of these keywords. Within a procedure, only the Dim keyword is allowed, and the accessibility is always private.
The typical way of outputting data in PHP is through the echo() language construct. The closest analogue to this in ASP.NET is the Response.Write() method, or the <%= %> construct, which is simply shorthand for Response.Write(). Code Sample 3 shows basic syntax for writing a value to the page.
Code Sample 3. Basic output in PHP
<?php
$hello = "hi how are you\n";
echo $hello;
?>
Code Sample 3. Basic output in Visual Basic .NET
<%
Dim Hello As String = "Hi how are you" & vbcrlf
Response.Write(Hello)
%>

ASP.NET also supports a server-control model for output, which offers a more structured, event-driven approach than PHP:
<script language="VB" runat="server">
Sub Page_Load(sender As Object, e As EventArgs)
TheDate.Text = DateTime.Now
End Sub
</script>
The current date is: <asp:Label id="TheDate" runat="server" />
This example declares a server-side Label control called TheDate. In the page's Load event, the control's Text property is set to the current date and time; when the page renders, the label outputs that text inside an HTML span element.
PHP has several flow-control constructs, such as for, while, switch, and foreach, but the most common is the if/else statement. Visual Basic .NET has very similar constructs with similar syntax. Code Sample 4 provides a comparison of equivalent conditional logic in PHP and Visual Basic .NET.
Code Sample 4. Basic conditional logic in PHP
if ($a > $b) {
print "a is bigger than b";
} elseif ($a == $b) {
print "a is equal to b";
} else {
print "a is smaller than b";
}
Code Sample 4. Basic conditional logic in Visual Basic .NET
If a > b Then
    Response.Write("a is bigger than b")
ElseIf a = b Then
    Response.Write("a is equal to b")
Else
    Response.Write("a is smaller than b")
End If
Switch statements are common language constructs in most programming languages, used when you wish to test a single expression against multiple values. They are commonly used to replace if statements that contain multiple elseif/else blocks.
Code Sample 5 shows a comparison between PHP's switch statement and Visual Basic's Select Case statement.
Code Sample 5. A switch statement in PHP
switch ($i) {
case 0:
print "i equals 0";
break;
case 1:
print "i equals 1";
break;
case 2:
print "i equals 2";
break;
default:
print "i is not equal to 0, 1 or 2";
}
Code Sample 5. A Select Case statement in Visual Basic .NET
Select Case i
    Case 0
        Response.Write("i equals 0")
    Case 1
        Response.Write("i equals 1")
    Case 2
        Response.Write("i equals 2")
    Case Else
        Response.Write("i is not equal to 0, 1 or 2")
End Select
Another extremely common control construct is the loop. There are several different commonly recognized types of loops that both PHP and .NET support.
Code Sample 6. For loop in PHP
for ($i = 1; $i <= 100; $i++) {
print $i;
}
Code Sample 6. For loop in Visual Basic .NET
Dim sum As Integer = 0
Dim counter As Integer
For counter = 1 To 100 Step 5
    sum += counter
Next counter

Dim i As Integer
For i = 1 To 100
    Response.Write(i)
Next i
In Visual Basic, this type of loop is known as a For...Next loop, and in PHP it is simply called a for loop. In the first example above, the += operator is used as shorthand for sum = sum + counter. In PHP you can break out of a loop with the break statement; a For...Next loop can be exited with the Exit For statement.
A conditional loop iterates over a set of instructions as long as a condition evaluates to true. Code Sample 7 shows an example of a basic conditional loop in each language.
Code Sample 7. Conditional loop in PHP
$i = 1;
while ($i <= 10):
print $i;
$i++;
endwhile;
Code Sample 7. Conditional loop in Visual Basic .NET
Dim i As Integer = 1
Do While i <= 10
    Response.Write(i)
    i += 1
Loop
In Visual Basic, this type of loop is known as a Do...Loop statement, or a while loop. PHP also supports do...while loops, which are very similar to while loops, except that the truth expression is checked at the end of each iteration instead of at the beginning. A do...while loop is therefore guaranteed to run at least once, whereas a regular while loop whose expression evaluates to FALSE at the outset ends immediately.
Here is an example of this in PHP:
$i = 0;
do {
print $i;
}
while ($i>0);
This loop runs exactly one time, since after the first iteration, when the truth expression is checked, it evaluates to FALSE ($i is not bigger than 0) and the loop execution ends.
In Visual Basic .NET, you can do much the same thing:
Dim i As Integer = 0
Do
    Response.Write(i)
Loop While i > 0
However, Visual Basic .NET supports a built-in looping capability that PHP does not support, which is to evaluate a condition until it is true.
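Visual Basic's Do Until form loops until its condition becomes true; the following minimal sketch prints the numbers 0 through 10:

```vbnet
Dim i As Integer = 0
Do Until i > 10
    Response.Write(i)
    i += 1
Loop
```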
PHP 4 (not PHP 3) includes a foreach construct, much like ASP.NET and some other languages. This simply gives an easy way to iterate over arrays. foreach works only on arrays and will issue an error when you try to use it on a variable with a different data type or on uninitialized variables. In Visual Basic .NET, the equivalent is the For Each...Next statement. Code Sample 8 shows an example of looping over an array in each language.
Code Sample 8. foreach loop in PHP
$i = 0;
foreach ($a as $v) {
    print "\$a[$i] => $v\n";
    $i++;
}
Code Sample 8. For Each loop in Visual Basic .NET
Dim v As Integer
For Each v In a
    Response.Write(v & vbcrlf)
Next v
Arrays in PHP function very differently than arrays in Visual Basic .NET. In PHP, arrays are actually associative arrays, but they can be used like indexed or associative arrays. In Visual Basic .NET, arrays are indexed arrays, and Visual Basic .NET does not support associative arrays as such (although you can build them yourself; see collections below). Other .NET languages do support these types of arrays, but Visual Basic .NET does not, and this can pose a challenge to PHP developers who are accustomed to associative arrays and might wish to model one in Visual Basic .NET. Code Sample 9 shows an example of a simple array in PHP and Visual Basic .NET.
Code Sample 9. Example of array creation in PHP
$a = array (0,1,2);
Code Sample 9. Example of array creation in Visual Basic .NET
Dim a() As Integer = New Integer(2) {0, 1, 2}
In a Visual Basic .NET array, variables are declared the same way as other variables, using the Dim statement. You follow the variable name with one or more pairs of parentheses to indicate that it is an array rather than a scalar (a variable containing a single value).
Visual Basic .NET arrays are also type-safe; to store values of different types in a single array, you can declare the array as type Object.
Visual Basic .NET arrays can be either nested arrays-of-arrays, or multi-dimensional arrays. There are a variety of functions to manipulate arrays that are comparable to PHP, with one exception. Because Visual Basic .NET has no notion of associative arrays, there are no functions to access or index or doing anything by an array's "Key," which does not exist in Visual Basic .NET.
While we have made several references to Visual Basic .NET not supporting associative arrays, it is possible to create what is called a collection as an alternative to an array. Collections work somewhat like associative arrays in that they can be used to solve similar problems.
In some circumstances, it can be more efficient to store items in a collection than in an array.
You might want to use a collection if you are working with a small, dynamic set of items. To create a collection, all you need to do is declare and instantiate a Collection variable, as shown in the sample code below:
Dim myCollection As New Collection()
You then can use the Add method to add members to the collection. In this example, we create four strings and add them to the collection. A unique String value may optionally be added as the key for the members of your collection. This value is passed to the collection as the second argument of the Add method.
Dim w, x, y, z As String
w = "key1"
x = "key2"
y = "key3"
z = "key4"
myCollection.Add(w, "1")
myCollection.Add(x, "2")
myCollection.Add(y, "3")
myCollection.Add(z, "4")
While this may seem a lot like creating an associative array in PHP, a collection is a very different animal in that it is an object in and of itself. For PHP developers moving to ASP, it is recommended that they look at the Microsoft Visual Basic .NET Language Specification before trying to model associative arrays in Visual Basic .NET.
A common task in any Web application is the management of state, which is usually done using cookies or an application state management construct such as Session variables. Visual Basic .NET has similar methods to PHP for handling state.
Setting cookies in both environments is relatively trivial. Code Sample 10 shows an example of writing and then reading a cookie in each language.
Code Sample 10. Setting and retrieving cookies in PHP
<?php
$value = 'something from somewhere';
setcookie ("TestCookie", $value,time()+3600); /* expire in 1 hour */
?>
/* and to retrieve the set cookie */
<?
echo $_COOKIE["TestCookie"];
?>
Code Sample 10. Setting and retrieving cookies in Visual Basic .NET
Dim value As String = "something from somewhere"
Dim myCookie As New HttpCookie("TestCookie")
Dim now As DateTime = DateTime.Now
myCookie.Value = value
myCookie.Expires = now.AddHours(1)
Response.Cookies.Add(myCookie)
' and to retrieve the set cookie
Response.Write(Request.Cookies("TestCookie").Value)
Session variables in ASP.NET are very similar to PHP session variables. In both environments, session variables handle cookie manipulation for you to provide persistence throughout a visit to a Web application. One difference is that an ASP.NET session variable can hold any .NET object, since values are stored as System.Object references and cast back to their original type when retrieved. Code Sample 11 shows some examples of session variable usage.
Code Sample 11. Session variable usage in PHP
<?php
session_start();
session_register('today');
$today = getdate();
?>
<?= $today ?>
Code Sample 11. Session variable usage in Visual Basic .NET
Session("Today") = DateTime.Now
Dim today As Date = CDate(Session("Today"))
Response.Write(today)
ASP.NET also has another form of state management called Application State that is analogous to session variables but persists for the lifetime of an application. This allows you to store various things, such as configuration information or database connection strings that would not change while the application is running.
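As a minimal sketch, application state uses the same syntax as session state but goes through the Application object; the key name and connection string below are illustrative:

```vbnet
' Store a value once for the whole application, for example in Application_Start
Application("ConnectionString") = "server=(local);database=mydb;Trusted_Connection=yes"

' Read it from any page while the application is running
Dim connStr As String = CStr(Application("ConnectionString"))
```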
For more information on this subject check out the Application State section of the .NET Framework Development Guide.
ASP.NET supports most of the popular features of other regular expression implementations such as those in Perl and awk. It is designed to be compatible with Perl 5 regular expressions. ASP.NET also supports regular expression features not yet seen in other implementations, such as right-to-left matching and on-the-fly compilation. Since ASP.NET is compatible with Perl regular expressions and since most PHP developers use Perl-compatible regular expressions, there is usually no need for translating your syntax from one form to the other. For more information about .NET regular expression support, see .NET Framework Regular Expressions.
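For instance, a small sketch of matching with the Regex class, assuming the System.Text.RegularExpressions namespace is imported (the pattern and input string are illustrative):

```vbnet
' Find the first run of digits in the input string
Dim m As Match = Regex.Match("Order #1234", "\d+")
If m.Success Then
    Response.Write(m.Value)  ' writes 1234
End If
```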
The ASP.NET framework includes support for structured exception handling through the familiar Try/Catch language construct, which provides the ability to catch exceptions that may arise in code. This is something that is missing in PHP and will be added in PHP 5.
Below is an example of how this is done in Visual Basic .NET:
Try
    ' Code that might cause an error goes here.
Catch e As Exception
    ' Code to handle the error goes here.
    ' Optional: more Catch blocks here.
Finally
    ' Cleanup code that runs in all cases goes here.
End Try
The code in the Try block is attempted; if an exception occurs, control passes to the matching Catch block. The Finally block runs whether or not an exception was thrown, which makes a Try...Finally statement (with no Catch) useful for cleanup tasks such as closing files or database connections.
In PHP there are generally two common ways to access a database: by using a database specific extension or by using the database independent PEAR DB library.
In ASP.NET, database access is performed through a set of objects known as ADO.NET, which serves much the same function as the PEAR DB library. ADO.NET includes data providers for specific databases, such as SQL Server and any OLE DB-compatible source, that are optimized to provide high-performance access to each of those specific databases. Third parties also provide support for other databases such as MySQL. The examples in this section will use the SQL Server objects, as that is one of the most commonly used databases with ASP.NET.
System.Data, System.Data.SqlClient, and System.Data.OleDb are the namespaces that define database access in ADO.NET. To give your page access to the classes, you need to import the System.Data and System.Data.SqlClient namespaces into your page:
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>
Code Sample 12 shows an example of executing a query in each language. With PHP we have shown a connection using PEAR, which is not only one of the most popular methods of connecting to a DB, it is the most analogous to ADO.NET.
Code Sample 12. Executing queries in PHP
<?php
// connect
require_once('DB.php');
$db = DB::connect("mysql://mydbview:user@localhost/mydb");
if (DB::isError($db)) {
    die($db->getMessage());
}
$sql = "select * from mytable";
$q = $db->query($sql);
if (DB::isError($q)) {
    die($q->getMessage());
}
while ($row = $q->fetchRow()) {
?>
<tr>
    <td><?= $row[0] ?></td>
    <td><?= $row[1] ?></td>
    <td><?= $row[2] ?></td>
</tr>
<?php
}
?>
Code Sample 12. Executing queries in Visual Basic .NET
<script runat="server">
Sub Page_Load(Sender As Object, E As EventArgs)
    Dim myConnection As New SqlConnection("server=(local)\NetSDK;database=mydb;Trusted_Connection=yes")
    Dim myCommand As New SqlDataAdapter("select * from mytable", myConnection)
    Dim ds As New DataSet()
    myCommand.Fill(ds, "myDataset")
    myDataGrid.DataSource = ds
    myDataGrid.DataBind()
End Sub
</script>

<%-- outputting the result --%>
<form runat="server">
    <asp:DataGrid id="myDataGrid" runat="server" />
</form>
In PHP, your query result is stored in a variable called a result set, while in ADO.NET it is stored in a DataSet object. A PHP result set is a read-only view of the data returned, whereas the .NET DataSet is an in-memory, read-write view of the data, allowing .NET developers to easily manipulate data returned from a data source.
When outputting data, ASP.NET offers several methods for display of data to the user or client. The first is similar to PHP, which is to loop through the result set using a SQLDataReader object to write out the data we wish to display from the query. The more common way, which does not have an analog in PHP, is ASP.NET's data binding. This allows developers to build User Interface and Display controls that can be used and reused throughout an application and allows for greater abstraction of display from data and logic. Data binding's flexible syntax allows you to bind not only to data sources, but also to simple properties, collections, expressions, and even results returned from method calls.
To use data binding, you need to assign some data source such as query results to the DataSource property of a data-aware server control (such as the DataGrid). Optionally, you can provide some additional formatting information for each column and call the DataBind() method. The server control will take care of the rest.
For example, in Code Sample 12 we used the data binding syntax to output the result of our query like this:
<%-- In the page load event --%>
myDataGrid.DataSource = ds
myDataGrid.DataBind()

<%-- outputting the result --%>
<asp:DataGrid id="myDataGrid" runat="server" />
Data-aware server controls provide additional functionality, such as support for paging or in-line editing of the data being displayed. For more information and examples, please refer to Data Binding Server Controls.

Data Caching and Page Caching

Both environments provide support for both caching strategies; however, ASP.NET has many more methods for caching and managing data than PHP, allowing developers to pick the method and strategy that suits their application's performance needs.
Caching the HTML output of a page request is a common method of reducing load on a Web application. PHP does not natively support Page Caching, but it can be performed programmatically or by downloading third party packages. Usually page caching is performed on the server in numerous ways, from caching the compiled code to actually writing out the output of the page to a separate file that is updated whenever the code is updated.
In ASP.NET, page caching can be performed via either the low-level OutputCache API or the high-level @ OutputCache directive. When output caching is enabled, an output cache entry is created on the first GET request to the page. Subsequent GET or HEAD requests are served from the output cache entry until the cached request expires.
The output cache respects the expiration and validation policies for pages. If a page is in the output cache and has been marked with an expiration policy that indicates that the page expires 60 minutes from the time it is cached, the page is removed from the output cache after 60 minutes. If another request is received after that time, the page code is executed and the page can be cached again. This type of expiration policy is called absolute expiration—a page is valid until a certain time.
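For example, an @ OutputCache directive along the following lines (the values are illustrative) caches a page's rendered output for 60 minutes:

```
<%@ OutputCache Duration="3600" VaryByParam="None" %>
```

Duration is specified in seconds, and VaryByParam controls whether separate cache entries are kept for different query-string or form parameters.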
In addition to output caching an entire page, ASP.NET provides a simple way for you to cache just specific portions of a page, which is called fragment caching. For more information about ASP.NET's output caching, see Caching ASP.NET Pages.
There are a variety of ways to cache query results in PHP programmatically, but none is native to the environment. Building data-caching classes or systems in PHP can be done simply for small amounts of information with session variables and/or cookies, or for larger and more complex information by building your own data-caching classes. The problem is that when you are working with large amounts of complex data, this can be inefficient, error prone, and somewhat complex to program.
ASP.NET offers a system-wide method for caching data (DataSets, arrays, collections, XML objects, and so on.) through the Page.Cache object. For applications that need more sophisticated functionality, ASP.NET cache supports three specific types of cache: expiration, scavenging, and file and key dependencies.
ASP.NET Data Caching provides programmers many different methods to manage their applications and make them more responsive and efficient. For more information, see the Cache Class documentation for the Cache object.
Both PHP and ASP.NET have built-in support for sending e-mail programmatically. To send e-mail with ASP.NET in this example, you need to set up the IIS SMTP service; it must be installed because the built-in mail objects in .NET depend on objects included with the service. .NET, though, allows you to work with any SMTP or mail server, just as PHP does. Code Sample 13 compares the basic syntax of each environment.
Code Sample 13. Sending e-mail in PHP
$to = "test@atnoaddress.com";
$from = "me@nosuchaddress.com";
$subject = "hi";
$message = "just wanted to say hi";
mail($to, $subject, $message, "From: $from");
Code Sample 13. Sending e-mail in Visual Basic .NET
Dim myMail As MailMessage = New MailMessage()
myMail.From = "me@nosuchaddress.com"
myMail.To = "test@atnoaddress.com"
myMail.Subject = "hi"
myMail.Body = "just wanted to say hi"
SmtpMail.Send(myMail)
Built-in support for parsing and manipulating XML in PHP is rather poor. While developers can use it for parsing and traversing XML, it lacks support for DOM parsing, which, while slower than PHP's SAX parser, is much easier to work with. PHP also cannot natively validate XML documents against a DTD or XML Schema, and PHP does not support XSL/XSLT, as well as numerous other technologies that are common to many Web application products on the market. While there are numerous PHP packages that allow PHP to accomplish many XML-related tasks, .NET and ASP.NET have extensive built-in support for working with XML. XML is one of the technologies at the heart of the .NET platform. You can learn more about Web Services by reading How ASP.NET Web Services Work.
The .NET Framework has extremely comprehensive support for all XML recommendations as defined by the W3C and supports XSL/XSLT, XPath, XQuery, as well as a host of other technologies, such as UDDI, WSDL, SOAP for Web services.
While it is possible to create XML-RPC type mechanisms in PHP, it is much harder to create Web services, which allow developers to exchange data and procedures using common protocols and standards that provide for discovery, data binding, and description. .NET has extensive support for Web services and related technologies, such as SOAP, WSDL, and UDDI. .NET also makes the creation and development of Web services a trivial matter for the developer. For example, here is code to create a simple hello world Web service:
<%@ WebService Language="VB" Class="HelloWorld" %>
Imports System
Imports System.Web.Services
Public Class HelloWorld : Inherits WebService
<WebMethod()> Public Function SayHelloWorld() As String
Return("Hello World")
End Function
End Class
The .NET Framework SDK allows you to generate your proxy classes using the command-line Web Services Description Language tool (WSDL.exe). To create a proxy class called HelloWorld.cs for the above example, you run WSDL.exe against the service's WSDL endpoint; the generated proxy class exposes the SayHelloWorld method.
From the client perspective, the code would be simple, as shown in the following example:
Dim myHelloWorld As New HelloWorld()
Dim sReturn As String = myHelloWorld.SayHelloWorld()
And that's all there is to creating a simple Web service. For more information on XML in general and Web services in specific, you can go to Employing XML in the .NET Framework.
In most cases, a migration from PHP to ASP.NET is not very complex for small, simple applications. Due to underlying architectural differences as well as ASP.NET's OOP paradigm, more sophisticated and complex applications need to be planned and well thought out to take advantage of ASP.NET's more rigorous separation of display from logic and data, as well as time-saving built-in functionality that significantly reduces the amount of code necessary for comparable tasks.
Now that you've had an introduction to ASP.NET, try the following: | http://msdn.microsoft.com/en-us/library/aa479002.aspx | crawl-002 | refinedweb | 5,844 | 57.16 |
import dbf to sql
Posted by samir shah at 11/1/2007 1:56:56 AM
Hi All, I need help to import dbf files into sql or msde database The dbf file is currently managed by sql server Please suggest how can we import data from dbf file into sql or msde Regards Samir Shah ...
Connection to SQL using domain
Posted by SQLQuestion at 10/31/2007 3:03:38 PM
Is it possible to open a SQL Server connection using domain credentials that are not your own? In ODBC dialog, you have the option for Windows Authentication (your own domain account - this works, but not what I want) or SQL Authentication. In SQL Server, I have a login created as a domain...
SQL Server 2000 not showing up in list of Servers
Posted by kps_boise at 10/31/2007 9:57:01 AM
I have both SQL2005 and SQL2000 installed in the same machine. When I am trying to configure a ODBC connection to SQL2000 database, using SQL2000 driver, I do not see the SQL2000 in the list of servers. Please let me know how I can make it available in the list of servers....
Re: MS-Access and SQL 2005 Application Roles
Posted by Mary Chipman [MSFT] at 10/31/2007 12:00:00 AM
Application roles don't work well from Access because the ODBC driver opens additional connections under the covers that do not have the application role activated. -Mary On Wed, 24 Oct 2007 09:19:01 -0700, Dedge <Dedge@discussions.microsoft.com> wrote: >I have created an Application Ro...
install ODBC connections with login scripts or GPO
Posted by totoro at 10/30/2007 12:56:03 PM
Is there any way to add our standard System DSN or equiv, ODBC connections on XP clients via scipt or GPO? THis cannot be the first time this question has been asked but I cannot find a satisfactory answer. Thanks in advance!...
SQLState: ‘S1T00’
Posted by Rui Oliveira at 10/29/2007 8:11:00 AM
In some computers, just a few in thousands, appears the following message when try to login in SQL. Connection failed: SQLState: ‘S1T00’ SQL Server Error: 0 [Microsoft][ODBC SQL Server Driver]Timeout expired I am using a ODBC string to connect to DB. This happened in two computers, ...
Slow MS Access 2003 application When accessing SQL Server 2005
Posted by Ben at 10/28/2007 7:55:19 PM
An update on this issue, I am still experiencing this slowness problem. I have tried from executing the maintenance processes, to checking the codes of the MS Access 2003 application. One new thing that we found is that if we access the MS Access application from one of the workstations vi...
Relationship between odbc driver with .net data provider
Posted by kashif ali at 10/26/2007 12:00:00 AM
i just want to ask quite basic question that what when data is commuicated between DB and application , where odbc driver plays its role, if we use ..net provider for odbc or u can place this question as whats the relation between .net provider and odbc driver? Its quite simple question ur ans...
Unable to start TSQL Debugging. Could not attach to SQL Server Process on 'srvname'. The RPC server is unavailable.
Posted by DR at 10/24/2007 7:39:40
MS-Access and SQL 2005 Application Roles
Posted by Dedge at 10/24/2007 9:19:01 AM
I have created an Application Role with DEFAULT_SCHEMA=dbo. From MS-Access, I retrieve the AR password and execute sp_setapprole as follows: qd.Connect = strODBCConnect qd.ReturnsRecords = False qd.SQL = "exec sp_setapprole 'MyAppName','" & strAppRolePWD & "';" qd.Execute Fine un...
Performance get bad if not using "Use Ansi Nulls, Spaces, ..." in
Posted by johnny72 at 10/24/2007 7:04:00 AM
I Want to select from an MSSQL-Server datatbase and realised, that if i do not use the flag mentioned above, the select takes much longer I'v got 2 tables with indices: create table XXX (S1 varchar (30) NOT NULL PRIMARY KEY, S2 int NOT NULL) create unique index Idx_X1 on XXX (S2) create t...
Re: error: odbc in vista (with asp)
Posted by OpenlinkEmma NO[at]SPAM gmail.com at 10/22/2007 7:18:02 PM
On Oct 20, 11:47 am, Milto...@gmail.com wrote: > i dont connect db (sql server 2005) in vista. why? > i have a cod in asp and no conect database. > > the error is: > > "Microsoft SQL Native Client Version 09.00.3042 > > Running connectivity tests... > > Attempting connection > [Microsoft...
important
Posted by Shirish at 10/20/2007 8:48:31 PM i...
error: odbc in vista (with asp)
Posted by MiltonPT NO[at]SPAM gmail.com at 10/20/2007 8:47:28
error: odbc in vista
Posted by MiltonPT NO[at]SPAM gmail.com at 10/20/2007 8:21:58
Re: MySQL linked server on SQL Server 2000
Posted by Pete Griffiths at 10/18/2007 12:00:00 AM
Thanks for that Paul - this looks useful! Pete "Paul HR" <phototopix@test.com> wrote in message news:%23ee45LKEIHA.3980@TK2MSFTNGP03.phx.gbl... > I'm not sure about a fix for the specific issue you mention above but > i've found this post providing details on making the connection via ODBC...
variations
Posted by Jeff Kish at 10/17/2007 2:48:24 PM confi...
Re: MySQL linked server on SQL Server 2000
Posted by Paul HR at 10/17/2007 2:49:12 AM
I'm not sure about a fix for the specific issue you mention above but i've found this post providing details on making the connection via ODBC between MSSQL2000 and MySQL. Hope this helps. Paul *** Sent via Developersdex h...
Fatloss computer program
Posted by Angel vasquez at 10/15/2007 7:24:58
SqlExecDirect returns immediately while trigger is still executing
Posted by Andrey Medvedev at 10/15/2007 12:00:00 AM
Hello, All I am getting the following problem trying to insert value in the MS SQL 2005 table that has trigger on insertion taking some time to execute (it inserts 1000 records in another table). The problem is that SqlExecDirect returns immediately while trigger is still executing, causin...
HP Desktop for sale!
Posted by pbdude at 10/14/2007 3:58:08
Re: updating a linked Access table
Posted by Mary Chipman [MSFT] at 10/11/2007 6:02:59 PM
Can you just do an end run around PB and push it into the SQLS table directly from Access? That would be my recommendation. -mary On Thu, 11 Oct 2007 13:00:01 -0700, jaylou <jaylou@discussions.microsoft.com> wrote: >Dont shoot the messenger.... >I inheited a system that uses an Access MD...
RE: Communication link failure [ODBC SQL Server Driver]
Posted by calton NO[at]SPAM online.microsoft.com at 10/11/2007 3:06:48 PM
What Operating System, Service Pack on the OS and NIC Card do you have on both sides of the conversation? ------------------------------------- Chris Alton, Microsoft Corp. SQL Server Developer Support Engineer This posting is provided "AS IS" with no warranties, and confers no rights. ------...
updating a linked Access table
Posted by jaylou at 10/11/2007 1:00:01 PM
Dont shoot the messenger.... I inheited a system that uses an Access MDB as the front end with local tables in it. There is a SQL server 2005 database which pulls data from the Access tables and inserts into SQL tables to create invoices. Information is keyed into Access, at the end of the ...
MySQL linked server on SQL Server 2000
Posted by Pete Griffiths at 10/11/2007 12:00:00 AM
Hi folks, I'm having a spot of bother creating a linked MySQL server on SQL2K. I'm using the following code on an SP4 SQL2K instance. MySQL 5.1 is installed on the same box, and I've installed the MySQL ODBC 3.51 Driver. sp_addlinkedserver 'myAlias' , 'MySQL' , 'MSDASQL' , Null ...
RE: Communication link failure [ODBC SQL Server Driver]
Posted by Uma at 10/9/2007 8:09:10 error ...
RE: Communication link failure [ODBC SQL Server Driver]
Posted by Uma at 10/9/2007 8:09:05 erro...
RE: Communication link failure [ODBC SQL Server Driver]
Posted by Uma at 10/9/2007 8:06:00 AM
I just found some helpful suggestion in the microsoft forum. I try to increase windows virtual memory, don't know if it works. I'll be back to report later. More suggestion is appriciate, Thank you in advance, Uma "Uma" wrote: > Hello ! > > Anyone, please help. > > I got the e...
Communication link failure [ODBC SQL Server Driver]
Posted by Uma at 10/9/2007 6:53:03 AM
Hello ! Anyone, please help. I got the error 08501 : [Microsoft][ODBC SQL Server Driver] Communication link failure, when I connect to database in SQL Server via some application and it happened very very often. I'm trying to find the way to solve on internet, but could not. Please su...
Created attachment 24861 [details]
Extensions of XWPF to insert tables, paragraphs with pictures, styles and numberings
I ran into a problem with adding tables and paragraphs to a document. For example, if I have three paragraphs between two tables, I am only able to insert a new table between those two tables; I cannot position the table exactly after the first of the three paragraphs. Initially I tried to handle this with XmlCursors, but debugging an XmlCursor is not much fun. So I decided to create a marker interface IBodyElement, which is implemented by XWPFParagraph and XWPFTable (yes, I call them body elements). On the other hand, I created the interface IBody, which is implemented by all classes that own arrays of tables and paragraphs (currently the classes XWPFDocument, XWPFTableCell, and XWPFHeaderFooter). IBody declares the method signatures that handle the operations with XWPFTable and XWPFParagraph inside a body.
To support styles and numberings, the classes XWPFStyles, XWPFStyle, XWPFNumbering, XWPFNum, and XWPFAbstractNum were added. The classes XWPFPictureData and XWPFPicture were created to support pictures; these classes are quite similar to XSSFPictureData and XSSFPicture.
Hope you like my changes. :-)
Philipp Epp
Created attachment 24862 [details]
Test-Data of the test cases in the previous patch
This test data must be added to Test-Data/documents/.
Hello all
Maybe someone can look into this. The attached patch contains a major contribution to the XWPF classes. The new classes and methods allow the modification of existing objects to some extent.
If you need more information on the classes, the API or anything else, please let us know.
The classes were created as part of a diploma thesis and we use them internally. For now we merge ongoing POI commits with this patch, until POI either accepts the contribution or comes up with some other API that provides equivalent modification options.
Regards,
Stefan Stern
Hi all,
Could somebody tell me what I did wrong with this patch? Is there any chance it will be accepted as it is, or do I have to change it (split it into smaller patches or change the API) so that at least some parts of my patch find their way into Apache POI?
Thanks for your response,
Philipp Epp
Thanks for this patch, applied to trunk in r953704 with only minor tweaks.
In future though, it's generally worth mentioning on the dev list when you're working on big contributions like this. That should allow you to get feedback, and give us a heads-up on the impending patch. This generally reduces the chances that everyone sees a monster patch and decides "I don't have time to review that now, I'll wait for someone else"...
Also, any chance you could send in an ICLA at some point when you have a minute? <>. Can just be emailed or faxed. It confirms that you understand the terms of the apache license that your contribution will be used under, and we like to have them from people who make large scale contributions so there's no confusion, and everyone's happy :)
Philipp,
Your patch included the following:
> +public class XWPFLatentStyles {
> + private CTLatentStyles latentStyles;
> + protected XWPFStyles styles; //LatentStyle shall know styles
Your patch didn't include a LatentStyle class. After 6 years, do you remember what LatentStyle was referring to?
Yes, you're right. After 6 years I really do not remember anything about XWPF anymore. Sorry, everything is gone.
Project Description:
Password Door Lock Security System using Arduino and Keypad: in this tutorial, you will learn how to make an efficient password-protected door lock security system using Arduino and a keypad. When you enter the correct 4-digit password, the door is opened for 5 seconds. Currently the password is 1234, which you can change in the programming; you can even select a password consisting of more than 8 digits. I have checked this Password Door Lock Security System many times and it worked perfectly. If a wrong password is entered 3 times, the person is locked out for 5 seconds and an LED is turned ON, which can be replaced with a buzzer. The number of wrong attempts can be increased or decreased as per the requirement.
In this tutorial, we will cover
- 4×4 keypad Pinout
- How the electronic lock works
- Complete circuit diagram
- Arduino programming and finally
- Testing
Without any further delay let’s get started!!!
Amazon Purchase Links:
12v Electronic Door Lock / Elock / Solenoid Lock:
Other Tools and Components:
Super Starter kit for Beginners
PCB small portable drill machines
*Please Note: These are affiliate links. I may make a commission if you buy the components through these links. I would appreciate your support in this way!
About the Password protected Door Lock Security System:
Nowadays security is the main concern. Each and every individual needs to feel secure. The main entrances, lockers, cabinets, etc should be protected with some kind of security systems. Doors locked using the conventional locks are not as safe as they used to be in the past, nowadays anyone can easily break-in by breaking these locks. The password-based door lock system allows only authorized persons to access the restricted areas. The entire project is controlled by using the Arduino. A 4×4 or 4×3 keypad can be used to enter the desired password.
About the 4×4 Keypad:
This is a 4×4 Keypad which means this Keypad has a total of 4 Rows and 4 Columns. But in this project, I am not using the 4th column.
4×4 Keypad Pinout:
As you can see the Keypad is provided with the female headers due to which it can be easily interfaced with the Arduino using the male headers or male to male type jumper wires.
Reading the pins in order: row 1, row 2, row 3, row 4, column 1, column 2, column 3, and column 4.
You can purchase the ready-made one which you can see in the picture above, or you can build one yourself using pushbuttons. In this project, you can also use a 4×3 Keypad.
The Electronic Lock is controlled through two wires; these two wires will be connected with the relay common and normally open legs.
Password Door Lock Security System Circuit Diagram:
This is the complete circuit diagram of the Password-protected Door Lock Security system, designed in CadSoft Eagle version 9.1.0. If you want to learn how to make a schematic and PCB, then watch my tutorial given below.
Let’s start with the 4×4 keypad.
Row 1 is connected with the Arduino’s Analog pin A0.
Row 2 is connected with the Analog pin A1.
Row 3 is connected with the Analog pin A2.
Row 4 with the Analog pin A3.
Column 1 is connected with the Arduino’s Analog pin A4.
Column 2 is connected with the Analog pin A5.
Column 3 is connected with the Arduino’s digital pin 2, and column 4 is not connected.
Pin number 1 and pin number 16 are connected with the Arduino's ground. Pin number 2 and pin number 15 are connected with the Arduino's 5 volts. Pin number 3 is the contrast pin of the 16×2 LCD and is connected with the middle leg of the variable resistor or potentiometer, while the other two legs of the variable resistor are connected with the Arduino's 5 volts and ground. The RS pin of the LCD is connected with the Arduino's pin number 10, the R/W pin is connected with the ground, and the enable pin is connected with the Arduino's pin number 9. The data pins D4 to D7 of the LCD are connected with the Arduino's pins 6, 5, 4, and 3.
An Led is connected with the Arduino’s pin number 12 through a 330-ohm resistor. This is a current limiting resistor.
A one-channel relay module is connected with the Arduino's pin number 13. This relay is used to control the electronic lock. As you can see, the electronic lock's 12-volt wire is connected with the common pin of the relay, while the normally open pin of the relay is connected with 12 volts. The ground wire of the electronic lock is directly connected with the ground of the 12-volt power supply.
About the Sponsor:
The PCB board used in this project is sponsored by the PCBway Company, which is one of the most experienced PCB and PCB assembly manufacturers. They create high-quality PCBs at reasonable prices. As you can see, the quality is really great, the silkscreen is quite clear, and the black solder mask looks amazing. I am 100% satisfied with their work.
The Gerber files of the PCB board used in this project can be downloaded by clicking on the link given below.
Download Gerber files:
High quality & Only 24 Hours Build time:
About the 16×2 LCD PCB board:
For easy interfacing, I designed a PCB for the 16×2 LCD. This PCB was manufactured by the PCBway Company. As you can see, it looks amazing; now I can easily interface this LCD with the Arduino board using male-to-male jumper wires.
4×4 Keypad and Electronic Lock Interfacing with Arduino:
All the connections are done as per the circuit diagram already explained. Now let’s have a look at the Arduino programming.
Password Door Lock Security System Arduino Programming:
Password Door Lock Security System Arduino Program Explanation:
#include <Keypad.h>
#include <LiquidCrystal.h>
I started off by adding libraries for the Keypad and 16×2 LCD.
const byte ROWS = 4; //four rows
const byte COLS = 3; //three columns
Next I defined the maximum number of rows and columns.
char keys[ROWS][COLS] = {
{‘1′,’2′,’3’},
{‘4′,’5′,’6’},
{‘7′,’8′,’9’},
{‘*’,’0′,’#’}
};
Then I defined a two dimensional array with keys information.
byte rowPins[ROWS] = {A0, A1, A2, A3}; //connect to the row pinouts of the keypad
byte colPins[COLS] = {A4, A5, 2}; //connect to the column pinouts of the keypad
The rows are connected with the Arduino’s Analog pins A0, A1, A2, and A3. While the columns are connected with A4, A5, and pin number 2.
// 16×2 LCD
#define rs 10
#define en 9
#define d4 6
#define d5 5
#define d6 4
#define d7 3
The RS pin of the LCD is connected with the Arduino’s pin number 10, EN pin is connected with pin number 9, d4 is connected with 6, d5 is connected with 5, d6 is connected with 4, and d7 pin of the LCD is connected with the Arduino’s pin number 3.
LiquidCrystal lcd(rs, en, d4, d5, d6, d7);
Initialize the library with the numbers of the interface pins
String password = “1234”;
The current password is 1234 which you can replace with any other password.
String mypassword;
The variable mypassword is used to store the password which is entered using the keypad.
int redled = 12;
int lock = 13;
An led is connected with the Arduino’s pin number 12 and the electronic lock is connected with the digital pin 13 of the Arduino.
int counter = 0;
Counter is a variable of the type integer and it is used to count the number of keys pressed, which helps me set the LCD cursor.
int attempts = 0;
int max_attempts = 3;
The variable attempts is used to store the number of wrong attempts, while the maximum number of wrong attempts is equal to 3.
void setup(){
Serial.begin(9600);
// set up the LCD’s number of columns and rows:
lcd.begin(16, 2);
pinMode(redled, OUTPUT);
pinMode(lock, OUTPUT);
digitalWrite(redled, LOW);
digitalWrite(lock, LOW);
Serial.println(“enter password”);
lcd.print(“Enter Password:”);
}
In the void setup function I activated the serial communication for the debugging purposes.
Set up the LCD’s number of columns and rows. The LED and electronic lock are set as the output using the pinMode() function. Turned OFF the led and electronic lock using the digitalWrite() function and finally, printed the Enter Password message on the LCD.
void loop()
{
keypadfunction();
}
Then starts the void loop function. As you can see the void loop function has only one function which is the keypadfunction().
Keypadfunction() is a user-defined function which has no return type and does not take any arguments as the input.
char key = keypad.getKey();
The getKey() function is used to read the pressed key and is stored in the variable key.
The following condition means: if a key is pressed, then send it to the Serial monitor for checking purposes, increment the counter by 1, and use the counter to update the LCD cursor.
If 1 is pressed then add 1 to the mypassword, if 2 is pressed then add 2 to the mypassword, and so on for all the keys.
if (key){
Serial.println(key);
counter = counter + 1;
lcd.setCursor(counter, 1);
lcd.print(“*”);
}
if (key == ‘1’)
{
mypassword = mypassword + 1;
}
if (key == ‘2’)
{
mypassword = mypassword + 2;
}
if (key == ‘3’)
{
mypassword = mypassword + 3;
}
if (key == ‘4’)
{
mypassword = mypassword + 4;
}
if (key == ‘5’)
{
mypassword = mypassword + 5;
}
if (key == ‘6’)
{
mypassword = mypassword + 6;
}
if (key == ‘7’)
{
mypassword = mypassword + 7;
}
if (key == ‘8’)
{
mypassword = mypassword + 8;
}
if (key == ‘9’)
{
mypassword = mypassword + 9;
}
if (key == ‘0’)
{
mypassword = mypassword + 0;
}
Next we compare the password entered.
if (key == ‘*’)
{
Serial.println(mypassword);
if ( password == mypassword )
{
lcd.clear();
lcd.println(“Welcome To”);
lcd.setCursor(0,1);
lcd.println(“ElectroniClinic”);
digitalWrite(lock, HIGH);
delay(5000);
digitalWrite(lock,LOW);
mypassword = “”;
counter = 0;
lcd.clear();
lcd.setCursor(0,0);
lcd.println(“Enter password”);
}
If the asterisk key is pressed, then compare the password entered with the pre-defined password. If both passwords are the same, then print Welcome To ElectroniClinic on the LCD, open the door, wait for 5 seconds, and close the door again. Empty the mypassword and counter variables, clear the LCD, set the cursor, and write Enter Password on the LCD.
else
{
Serial.println(“wrong”);
digitalWrite(lock, LOW);
attempts = attempts + 1;
if (attempts >= max_attempts )
{
lcd.clear();
lcd.setCursor(0,0);
lcd.print(“Locked Out”);
digitalWrite(redled, HIGH);
delay(5000);
digitalWrite(redled, LOW);
attempts = 0;
}
If a wrong password is entered, keep the door locked (do nothing) and increment the attempts counter. If the number of wrong attempts is greater than or equal to the value stored in the max_attempts variable, then clear the LCD, print Locked Out on the LCD, turn ON the LED for 5 seconds, and finally set attempts back to zero.
mypassword = “”;
counter = 0;
lcd.clear();
lcd.setCursor(0,0);
lcd.print(“Wrong Password”);
delay(1000);
lcd.setCursor(0,1);
lcd.print(“max attempts 3”);
delay(1000);
lcd.clear();
lcd.println(“Enter password”);
lcd.setCursor(0,1);
And finally, some messages on the LCD remind you of the maximum number of wrong attempts.
For the practical demonstration, watch the video given below. Don't forget to subscribe to my website and YouTube channel "Electronic Clinic". Support the website and channel by sharing the articles and videos. If you have any questions regarding this project or any other project, let me know in a comment.
Watch Video Tutorial:
Related Project:
Arduino password security system, Enter a Password using only one button
1 Comment
Thank you so much for this wonderful project. I like it and I think it is the best tutorial I have seen up till now. There is only one thing I want to ask you: is it possible to get a ready-made PCB for the 16×2 LCD? I could not find a tutorial for it, and a fully prepared PCB board for this project would be very helpful. I appreciate your work; it has got quality. Great work…..👍👌😃
370 Java Interview Questions – Crack your next Java interview and grab a dream job
TechVidvan is committed to making you a successful Java developer. After the detailed Java tutorials, practicals, and projects, we have come up with interesting Java interview questions and answers.
In this series, we will provide
In this article, we will discuss Java interview questions and answers for freshers. The reason we are sharing these interview questions is that you can revise all your fundamental concepts. The interviewer will surely check your Java fundamentals.
Java Interview Questions for Freshers
Q.1. What are the main features of Java?
Answer. Java is one of the most popular and widely used programming languages. This is due to the remarkable features it comes with, which are also called the buzzwords of Java. Some of these features are:
1. Java is simple to learn and understand- Java is very easy to learn, understand, and implement. It is simple because it avoids the use of complex features of C and C++ like explicit pointers, operator overloading, manual garbage collection, storage classes, etc.
2. Java is a platform-independent language- This feature is one of the most remarkable features of Java that makes it so popular. The compiled Java code is platform-independent and it can be run on any operating system.
3. Java is an object-oriented language- Java supports all the concepts of Object-Oriented Programming and everything is treated as an object. The term object-oriented means that we organize our software or the application as a combination of different objects and these objects contain both data and methods. Java supports all the OOPS features like Class, Encapsulation, Abstraction, Inheritance, Polymorphism.
4. Java is a secure language- Java provides security as it enables tamper-free and virus free systems. Java is best known for this feature. The other reasons for Java being a secure language are:
- Java does not support explicit pointers.
- All the Java programs run inside a virtual machine sandbox.
- There is a Bytecode verifier that checks the code fragments for illegal code.
5. Java is a multithreaded language- Java also supports multithreading, which is the process of running multiple threads simultaneously. This feature helps developers build interactive applications. The main advantage of multithreading is that it does not occupy separate memory for each thread; rather, there is a common shared memory area.
6. Java is distributed- Java is a distributed language as it enables users to create distributed applications. RMI(Remote Method Invocation) and EJB(Enterprise Java Beans) are used to develop distributed applications in Java.
7. Java is dynamic- Java is a dynamic language and supports the dynamic loading of classes. The classes can be loaded dynamically on demand. Java also supports dynamic compilation and automatic garbage collection(memory management). Therefore, Java is a dynamic language.
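A minimal sketch of item 5, multithreading (the thread name "worker-1" is arbitrary, chosen only for this example):

```java
public class Main {
    public static void main(String[] args) throws InterruptedException {
        // A Runnable is the unit of work a thread executes.
        Runnable task = () -> System.out.println(
            "running in " + Thread.currentThread().getName());
        Thread t = new Thread(task, "worker-1");
        t.start();   // runs concurrently with the main thread
        t.join();    // wait for the worker to finish
        System.out.println("done");
    }
}
```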
Q.2. Is Java a platform-independent language? If yes, Why?
Answer. Yes, Java is a platform-independent language. If one compiles a Java program on a machine then this compiled code can be executed on any machine in the world, irrespective of the underlying Operating system of the machine.
Java achieves the platform independence feature with the use of Byte code. Byte code is the intermediate code generated by the compiler which is basically platform-independent and can be run on any machine. The JVM(Java Virtual Machine) translates the bytecode into the machine-dependent code so that it can be executed on any Operating system. For example, we can write Java code on the Windows platform and can run the generated bytecode on Linux or any other supported platform. These can be achieved by the platform-independent feature of Java.
Q.3. What is a class in Java?
Answer. A class is a template or a blueprint that allows us to create objects from it. A class is basically a collection of data members and member functions that are common to its objects. For example, consider a class Polygon. This class has properties like color, sides, length, breadth, etc. The methods can be draw(), getArea(), getPerimeter(), etc.
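The Polygon class described above can be sketched in Java as follows. This is a minimal illustration; the exact fields and method bodies are assumptions, not part of the original answer.

```java
// A partial sketch of the Polygon class: data members (color, sides)
// plus member functions shared by all of its objects.
public class Polygon {
    private String color;
    private int sides;

    public Polygon(String color, int sides) {
        this.color = color;
        this.sides = sides;
    }

    public String getColor() { return color; }
    public int getSides() { return sides; }

    public static void main(String[] args) {
        Polygon p = new Polygon("red", 4);  // one object created from the blueprint
        System.out.println(p.getColor() + " polygon with " + p.getSides() + " sides");
    }
}
```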
Q.4. What is javac?
Answer. javac is a Java compiler that compiles the Java source code into the bytecode. It basically converts the .java files into .class files. These .class files are the bytecode which is platform-independent. Then JVM executes the bytecode to run the program. While compiling the code, we write the javac command and write the java file name. For example:
javac MyProgram.java
Q.5. What is method overloading in Java?
Answer. Method Overloading is a concept in which a class can have more than one method with the same name but with a different list of arguments. The overloaded method can contain a different number or type of arguments but the name of the methods should be the same. For example, a method add(int, int) with two parameters is different from the method add(int, int, int). We can overload a method using three different ways:
- Number of arguments: add(double, double) vs. add(double, double, double)
- Data type of parameters: add(int, double) vs. add(float, int)
- Sequence of parameters: add(float, int) vs. add(int, float)
Method overloading cannot be achieved by changing only the return type of methods. Method overloading is an example of static (compile-time) polymorphism in Java.
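The three overloading ways above can be sketched in one class; the class and method bodies below are illustrative assumptions.

```java
// Method overloading: one name 'add', several parameter lists.
public class OverloadDemo {
    static int add(int a, int b) { return a + b; }             // two arguments
    static int add(int a, int b, int c) { return a + b + c; }  // different number of arguments
    static double add(double a, double b) { return a + b; }    // different parameter types

    public static void main(String[] args) {
        System.out.println(add(1, 2));      // resolves to add(int, int)
        System.out.println(add(1, 2, 3));   // resolves to add(int, int, int)
        System.out.println(add(1.5, 2.5));  // resolves to add(double, double)
    }
}
```

Which overload runs is decided by the compiler from the argument list, which is why this is compile-time polymorphism.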
Q.5. What is Method Overriding in Java?
Answer. This is a popular Java interview question. Method Overriding is a feature in which the child class overrides the method of the superclass with a different implementation. To override a method the signature of the method in the child class must be the same as that of the method in the superclass that has to be overridden. Method Overriding can only be achieved in different classes and only with the help of Inheritance. Method Overriding is an example of dynamic or runtime polymorphism.
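A short sketch of overriding, using hypothetical Animal/Dog classes to show that the child's method with the same signature replaces the parent's at runtime.

```java
// Method overriding: Dog redefines speak() with the same signature as Animal.
class Animal {
    String speak() { return "generic sound"; }
}

class Dog extends Animal {
    @Override
    String speak() { return "woof"; }  // overrides the parent implementation
}

public class OverrideDemo {
    public static void main(String[] args) {
        Animal a = new Dog();            // parent reference, child object
        System.out.println(a.speak());   // runtime polymorphism selects Dog.speak()
    }
}
```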
Q.6. Does Java support Operator Overloading?
Answer. No, there is no support for operator overloading in Java. Unlike C++, Java does not support the feature of operator overloading in which one operator can be overloaded. But internally, Java overloads operators, for example, String concatenation is done by overloading the ‘+’ operator in Java.
Q.7. What is Encapsulation in Java?
Answer. Encapsulation is one of the Object-Oriented features that refer to wrapping up or binding of data members and functions into a single unit called class. The main idea of this concept is to hide the implementation details from the users. We can achieve the encapsulation by making the data members private and only the same class members can access these private members. Another way to achieve encapsulation is to use getter and setter methods.
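The getter/setter approach can be sketched as below; the Account class and its validation rule are assumptions for illustration.

```java
// Encapsulation: the balance field is private and reachable only through
// the public getter/setter, which can enforce invariants.
public class Account {
    private double balance;  // hidden from code outside this class

    public double getBalance() {
        return balance;
    }

    public void setBalance(double balance) {
        if (balance >= 0) {  // the setter can reject invalid values
            this.balance = balance;
        }
    }

    public static void main(String[] args) {
        Account acc = new Account();
        acc.setBalance(100);
        System.out.println(acc.getBalance());
    }
}
```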
Q.8. What is Inheritance in Java?
Answer. This is an important Java interview question of oops. Inheritance is another important feature of Java in which a child class inherits all the properties and functionalities from the parent class using the ‘extends’ keyword. Using Inheritance, we can achieve reusability of the code in our Java application because the same things need not be written every time they are needed, they just need to be extended whenever required.
Java supports single, multilevel, hierarchical inheritance with the use of classes, and the multiple inheritances in Java is achieved through interfaces, not the classes.
Q.9. Does Java support Multiple Inheritance?
Answer. Multiple Inheritance is a form of Inheritance in which a Java class can inherit from more than one class at the same time. Java does not support multiple inheritance with classes because it causes ambiguity, but we can achieve it by implementing multiple interfaces.
Q.10. What is an abstract class in Java?
Answer. An abstract class is a special class in Java that contains abstract methods(methods without implementation) as well as concrete methods(methods with implementation). We declare an abstract class using the abstract keyword. An abstract class can not be instantiated; we can not create objects from abstract class in Java. We can achieve partial to complete abstraction using abstract class. Let’s see the syntax of declaring an abstract class:
abstract class MyClass {
    abstract void myMethod();   // abstract method

    public void display() {     // concrete method
        // method body
    }
}
Q.11. What is an interface in Java?
Answer. An interface in Java is declared much like a class and contains data members and methods, but unlike a class, its methods must be abstract (methods without a body or implementation). Interfaces are used to achieve full abstraction in Java and are declared using the interface keyword. A class implements an interface using the implements keyword and must implement all the methods of the interface. (Since Java 8, interfaces may also contain default and static methods with bodies.)
Declaration of an interface:
interface MyInterface {
    // data members
    // abstract methods
}
Q.12. State the difference between an abstract class and the interface?
- The main difference between abstract class and the interface is that an abstract class may have abstract as well as non-abstract or concrete methods but the interface must have only abstract methods in it.
- Another difference between both of them is that abstract classes can have static methods, whereas an interface could not have static methods before Java 8 (from Java 8 onwards, interfaces may declare static methods too).
- An abstract class is declared with an abstract keyword and we declare the interface with an interface keyword.
- A class in Java can implement multiple interfaces but can extend only one abstract class.
- An abstract class can provide partial to full abstraction but with interfaces, we get full abstraction.
Q.13. What is ‘this’ keyword?
Answer. A ‘this’ keyword is a reserved word in Java which is a kind of reference variable and used to refer to the current object of the class. Its use is to refer to the instance variable of the current class and to invoke the current class constructor. We can pass this keyword as an argument while calling a method. We can also pass it as an argument in the constructor call. We cannot assign null values to this keyword.
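Both uses of this mentioned above can be sketched in one small class; the Point class and its fields are illustrative assumptions.

```java
// 'this' keyword: this.x refers to the current object's field,
// and this(...) invokes another constructor of the same class.
public class Point {
    private int x;
    private int y;

    public Point() {
        this(0, 0);    // invoke the two-argument constructor of the current class
    }

    public Point(int x, int y) {
        this.x = x;    // this.x is the instance field, x is the parameter
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        System.out.println(p.getX() + "," + p.getY());
    }
}
```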
Q.14. What do you mean by abstraction in Java?
Answer. Abstraction is an object-oriented concept by virtue of which we can display only essential details to the users and hiding the unnecessary details from them. For example, if we want to switch on a fan we just need to press the switch we do not require to know about the internal working of the switch.
In Java, we can achieve or implement abstraction in Java using abstract classes or interfaces. We can achieve 100% abstraction with interfaces and 0 to 100% abstraction with the abstract classes.
Q.15. What is a static variable in Java?
Answer. A static variable or class level variables are variables that are used to refer to the common properties of the object. For example, the company name for the employees of the company will be the same for all. The static variables are declared using the ‘static’ keyword.
The static variables get the memory area only for one time in the class area when the class gets loaded. The static variable makes the Java program memory efficient by saving memory. The life of the static variable is the entire execution of the program.
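The company-name example above can be sketched as follows; the class name and the "ExampleCorp" value are assumptions for illustration.

```java
// Static variable: companyName is shared by every Employee object,
// while name is a separate per-instance field.
public class Employee {
    static String companyName = "ExampleCorp";  // one copy for the whole class
    String name;                                // one copy per object

    Employee(String name) {
        this.name = name;
    }

    public static void main(String[] args) {
        Employee e1 = new Employee("Asha");
        Employee e2 = new Employee("Ben");
        // Both objects see the same static value.
        System.out.println(e1.name + " and " + e2.name + " work at " + Employee.companyName);
    }
}
```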
Java Basic Interview Questions
Now, let’s discuss more basic Java interview questions, which will help you in showcasing your rock-solid fundamentals and cracking the interview.
Q.16. What is a static method?
Answer. A static method is a method that we can directly call using the class rather than objects. The static methods belong to the class rather than instances or objects. We can call static methods without creating objects of the class. The static methods are used to access static variables or fields.
The use of a static method in Java is to provide class level access to a method where the method should be callable without creating any instance of the class. We declare static methods using the static keyword. We cannot override the static methods but we can overload them.
Declaring and calling the static method:
public class MyClass {
    public static void myMethod() {   // defining a static method
        // method body
    }

    public static void main(String[] args) {
        MyClass.myMethod();   // calling the static method directly via the class
    }
}
Q.17. Explain the super keyword with its use.
Answer. A super keyword is a reference word in Java that is used to refer to the objects of the immediate parent class or the superclass.
- The use of the super keyword is to access the data members of the parent class when the child class and the parent class both contain a member with the same name. Then if we want to access the parent class data member then we use the super keyword to access it.
- Another use of a super keyword is to access the method of the parent class when the child class overrides that method.
- Another use of a super keyword to invoke the constructor of the parent class.
Example:
super.variableName;
super.methodName();
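All three uses of super listed above can be sketched together; the Vehicle/Car classes and values are illustrative assumptions.

```java
// 'super' keyword: reading a shadowed parent field and calling
// the parent's overridden method from the child class.
class Vehicle {
    int maxSpeed = 120;
    String describe() { return "vehicle"; }
}

class Car extends Vehicle {
    int maxSpeed = 180;  // shadows the parent field of the same name

    @Override
    String describe() {
        // super.maxSpeed reads the parent field; super.describe() calls the parent method
        return super.describe() + " with parent limit " + super.maxSpeed
                + " and own limit " + maxSpeed;
    }
}

public class SuperDemo {
    public static void main(String[] args) {
        System.out.println(new Car().describe());
    }
}
```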
Q.18. What is the use of the final keyword in Java?
Answer. The final keyword in Java is a reserved word used for a special purpose. The final keyword is used with variables, methods, and classes in Java. We will discuss each of them:
Final variable: When we declare a variable using the final keyword, then this variable acts like a constant. Once we define the value of the final variable then we cannot change its value; it becomes fixed.
Final method: When a method is declared with the final keyword, then we can not override it in the child class. Any other method of the child class cannot override the final methods.
Final class: A class when declared with the final keyword, then it cannot be extended or inherited by the child classes. The final classes are useful when we do not want a class to be used by any other class or when an application requires some security.
Q.19. What are Polymorphism and its types in Java?
Answer. Polymorphism is an Object-Oriented concept that enables an object to take many forms. When the same method behaves in different ways in a class on the basis of the parameters passed to it, or of the actual object type at runtime, we call it Polymorphism in Java. The word Polymorphism comes from two roots: poly, meaning many, and morph, meaning forms.
Java provides two types of Polymorphism:
- Compile-time or static polymorphism
- Runtime or dynamic polymorphism
Q.20. Can you overload a main() method in Java?
Answer. Method Overloading is a feature in which a class can have the same method with different parameters list. And yes, it is possible to overload a main() method like other methods in Java, but cannot override it. When we overload the main() method, the JVM still calls the original main() method during the execution of the program.
Example:
public static void main(int args)
public static void main(char args)
public static void main(Integer[] args)
public static void main(String[] args)
Q.21. What are the differences between static and non-static methods?
Answer. Non-static methods are normal instance methods that can access both static and non-static variables and methods. Static methods are declared with the static keyword and can directly access only the static data members and static methods of their own or another class; they cannot directly access non-static methods and variables.
The second difference is that we can call a static method without creating an object of the class but we cannot call non-static members directly through class, we can only call by creating an object of the class.
Q.22. What is a constructor in Java?
Answer. A constructor in Java is a block of code that is used to initialize a newly created object. It is a special method that we do not call using an object, but it is automatically called when we instantiate an instance of the class. That is, when we use the new keyword to instantiate a class then the constructor gets called.
Constructors resemble methods in Java but the difference is that they cannot be declared as abstract, final, static, or synchronized. We can also not inherit or extend constructors. Also, they do not return anything, not even void. One important thing to note is that the constructor must always have the same name as that of the class.
There are two types of Java constructors:
- Default Constructor or no-argument constructor
- Parameterized constructor or argument constructor
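Both constructor kinds above can be sketched in one class; the Book class and its default values are illustrative assumptions.

```java
// The two kinds of constructors: no-argument and parameterized.
// Both share the class name and return nothing.
public class Book {
    String title;
    int pages;

    Book() {                          // no-argument constructor
        title = "Untitled";
        pages = 0;
    }

    Book(String title, int pages) {   // parameterized constructor
        this.title = title;
        this.pages = pages;
    }

    public static void main(String[] args) {
        Book defaults = new Book();               // constructor runs automatically on 'new'
        Book java = new Book("Java", 300);
        System.out.println(defaults.title + " / " + java.title);
    }
}
```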
Q.23. Can you declare constructors with a final keyword?
Answer. Though constructors resemble methods in Java, there are some restrictions. The constructors cannot be declared final in Java.
Q.24. What is a static block in Java?
Answer. A block is a sequence of statements written in curly braces. A block declared with the static keyword is a static block in Java. The use of a static block is to initialize static variables. There can be multiple static blocks in a class. Static blocks execute when the class is loaded into memory, and they execute only once. They are also called static initialization blocks.
Their syntax is:
static {
    // statement/s
}
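A minimal sketch of a static block initializing a static field; the Config class and the value 10 are assumptions for illustration.

```java
// Static block: runs once, when the class is first loaded,
// before any instance exists.
public class Config {
    static int maxConnections;

    static {
        maxConnections = 10;  // executed once at class-loading time
    }

    public static void main(String[] args) {
        System.out.println("maxConnections = " + Config.maxConnections);
    }
}
```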
Q.25. Explain-public static void main(String args[]) in Java?
Answer. This statement is declaring a main() method of a Java class. Let’s discuss each of its keywords:
- public-This is one of the access modifiers that means that the method is accessible anywhere by any class.
- static- static keyword tells that we can access the main() method without creating the object of the class.
- void- The void keyword tells that the main() method returns nothing.
- main- This is the name of the method.
- String args[]- args[] is the name of the String array. It contains command-line arguments in it that the users can pass while executing the program.
Q.27. What are packages in Java and what are the advantages of them?
Answer. A package in Java is an organized collection of related classes, interfaces, and sub-packages. We can think of a package as a folder that contains files. We write the package name at the top of the source file with the package keyword, and when we want to use any class or interface of a package in another class or interface, we bring it in with the import keyword.
There are two kinds of packages in Java:
- Built-in packages-provided by the Java API
- User-Defined/custom packages-created by users.
Advantages of using packages are:
- They prevent naming conflicts.
- They make searching or locating classes and interfaces easier.
- They provide controlled access
Q.28. What are access modifiers in Java?
Answer. Access Modifier in Java is used to restrict the scope of variable, class, method, constructor, or an interface in Java. There are four types of access modifiers in Java:
public, private, protected, and default.
public: We use this access specifier using the public keyword in Java. The public specifier has the widest scope among all the access modifiers in Java. The members that are declared with the public access specifiers are accessible from anywhere in the class even outside the class. We can access them within the package and outside the package.
private: We use this access specifier using the private keyword in Java. The private specifier has the most restricted scope among all the access modifiers in Java. The private data members can only be accessed from within the same class. We cannot access them outside the class, not even in the same package.
protected: We use this access specifier using the protected keyword in Java. Its access is restricted within the classes of the same packages and the child classes of the outside packages. If we do not create a child class, then we cannot access the protected members from the outside package.
default: If we do not write any access modifier while declaring members then it is considered to be the default access modifier. The access of the default members is only within the package. We cannot access them from the outside package.
Q.29. What is an object in Java? How can you create an object in Java?
Answer. An object is a real-world entity that has characteristics and behavior. It is the most basic unit of Object-oriented programming. It has some state, behavior, and identity. An object in Java is an instance of a class that contains methods and properties in it. We can only make the data users with the use of objects.
We can create an object using the new keyword in Java like this:
ClassName objectName = new ClassName();
Q.30. What is a break statement?
Answer. A break statement is a statement that we use in the loops to terminate a loop and the control goes automatically to the immediate next statement following the loop. We can use the break statement in loops and switch statements in Java. It basically breaks the current flow of the program in some particular conditions.
Q.31. What is a continue statement?
Answer. A continue statement is a statement used with the loops in Java. Whenever this continue keyword is encountered then the control immediately jumps to the beginning of the loop without executing any statements after the continue statement. It basically stops the current iteration and moves to the next iteration.
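The contrast between break and continue can be sketched in a single loop; the class and the chosen thresholds are illustrative assumptions.

```java
// break vs. continue: continue skips even numbers,
// break ends the loop entirely once the value exceeds 7.
public class LoopDemo {
    static String collect() {
        StringBuilder sb = new StringBuilder();
        for (int i = 1; i <= 10; i++) {
            if (i % 2 == 0) continue;  // skip to the next iteration
            if (i > 7) break;          // terminate the loop; control jumps past it
            sb.append(i).append(" ");
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(collect());  // odd numbers up to 7
    }
}
```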
Q.32. What is constructor chaining in Java?
Answer. Constructor Chaining in Java is the process of calling one constructor from another constructor with respect to the current object. The main aim of constructor chaining is to pass parameters using a bunch of different constructors, but the initialization takes place from a single place.
Constructor Chaining process can be performed in two ways:
- Using this keyword to call constructors in the same class.
- Using the super keyword to call the constructors from the base class.
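Both chaining directions above can be sketched together; the Person/Student classes and default values are illustrative assumptions.

```java
// Constructor chaining: this(...) chains within the same class,
// super(...) chains to the base-class constructor.
class Person {
    String name;
    Person(String name) { this.name = name; }
}

class Student extends Person {
    int rollNo;

    Student() {
        this("Unknown", -1);   // chain to the two-argument constructor below
    }

    Student(String name, int rollNo) {
        super(name);           // chain to the Person constructor
        this.rollNo = rollNo;  // initialization happens in one place
    }
}

public class ChainingDemo {
    public static void main(String[] args) {
        Student s = new Student();
        System.out.println(s.name + " / " + s.rollNo);
    }
}
```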
Java Interview Questions and Answers
We hope you are enjoying the Java interview questions and answers. Now, we are going to focus on:
- Java Interview questions on String
- Java Interview questions on OOPS
- Java Interview questions on Multithreading
- Java Interview questions on Collections
Q.33. Tell about the types of Inheritance in Java?
Answer. Inheritance is the process of acquiring properties from the parent class. There are 5 types of Inheritances in Java which are:
1. Single Inheritance- When one child class inherits from a single base class, then it is single inheritance.
2. Hierarchical Inheritance- When more than one child classes inherit from a single parent class, then it is called the Hierarchical Inheritance.
3. Multilevel inheritance- When there is child class inheriting from a parent class and that child class then becomes a parent class for another class, then this is said to be multilevel Inheritance.
4. Multiple Inheritance- Java does not support Multiple Inheritances through the class due to the ambiguity problem caused by it. Therefore java uses Interfaces to support Multiple Inheritance. In this, one interface can inherit more than one parent interface.
5. Hybrid Inheritance- Hybrid Inheritance is a combination of different Inheritances.
Q.34. Name some Java IDEs.
Answer. A Java Integrated Development Environment is an application that allows developers to easily write as well as debug programs in Java. An IDE is basically a collection of various programming tools that are accessible via a single interface. It also has several helpful features, such as code completion and syntax highlighting. Java IDE(Integrated Development Environment) provides a Coding and Development Environment in Java.
Some of Java IDEs are:
- NetBeans
- Eclipse
- Intellij
- Android Studio
- Enide Studio 2014
- BlueJ
- jEdit
- jGRASP
- jSource
- jDeveloper
- DrJava
Q.35. What do you mean by local variable and instance variable in Java?
Answer. Local Variables are the variables declared inside a method body, a block, or a constructor. Local variables are accessible only inside the block in which they are declared, for example within the main method or inside other methods and constructors.
Instance variables or class variables are the variables declared inside the class and outside the function or constructor. These variables are created at the time of object creation and are accessible to all the methods, blocks, or the constructors of the class.
Q.36. What do you mean by Exception?
Answer. An Exception is defined as an abnormal condition that occurs during the execution of the program. Exceptions can arise due to wrong inputs given by the user or if there is a wrong logic present in the program.
For example, if a user tries to divide a number by zero in his code, then the program compiles successfully but there is an arithmetic exception when he executes the program. There are two types of Exceptions in Java which are- Checked Exceptions and Unchecked Exceptions.
Q.37. Differentiate between Checked and Unchecked Exceptions.
Answer. Checked Exceptions: Checked Exceptions are the exceptions checked during the compilation of the program. If the method is throwing a checked exception then it should provide some way to handle that exception using a try-catch block or using throws keyword, otherwise, the program gives an error. Some Checked Exceptions in Java are:
- FileNotFoundException
- SQLException
- IOException
- ClassNotFoundException
Unchecked Exceptions: Unchecked Exceptions are exceptions that are checked during the runtime of the program. If a program can raise such an exception and there is no code to handle it, the compiler will not throw any error; they surface only at execution time. Some of the Unchecked Exceptions in Java are:
- Arithmetic Exception
- NullPointerException
- ArrayIndexOutOfBoundsException
- NumberFormatException
- IllegalArgumentException
Q.38. Differentiate between the throw and the throws keyword.
Answer. Both throw and throws keywords are used in Exception handling in Java. The differences between both of them are:
1. The throw keyword is used inside the method body to throw an exception, while the throws keyword is present in the method signature to declare the exceptions that may arise in the method statements.
2. The throw keyword throws an exception in an explicit manner while the throws keyword declares an exception and works similar to the try-catch block.
3. The throw keyword is present before the instance of Exception class and the throws keyword is present after the Exception class names.
4. Examples:
throw new ArithmeticException("Arithmetic");

void divide() throws ArithmeticException { ... }
Q.39. What is Exception Handling Java? What are some different ways to handle an exception?
Answer. Exception handling in Java ensures that the flow of the program does not break when an exception occurs. Exception handling in Java provides several ways from which we can prevent the occurrence of exceptions in our Java program. We can handle exceptions in Java using: try and catch block, finally keyword, throw and throws clauses, and custom exceptions.
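The try-catch-finally approach can be sketched with the division-by-zero case mentioned earlier; the helper method name is an assumption.

```java
// try-catch-finally: the ArithmeticException from dividing by zero is caught,
// and the finally block runs in both the success and failure paths.
public class ExceptionDemo {
    static String safeDivide(int a, int b) {
        try {
            return "result: " + (a / b);
        } catch (ArithmeticException e) {   // the unchecked exception is handled here
            return "cannot divide: " + e.getMessage();
        } finally {
            System.out.println("finally always runs");
        }
    }

    public static void main(String[] args) {
        System.out.println(safeDivide(10, 2));
        System.out.println(safeDivide(10, 0));  // flow does not break; the catch handles it
    }
}
```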
Q.40. How does Java achieve high performance?
Answer. Java provides high performance by the use of JIT compiler- Just In Time compiler, that helps the compiler to compile the code on demand. The compilation will occur according to the demand; only that block will be compiled which is being called. This feature makes java deliver high performance. Another reason is the Automatic Garbage Collection in Java that also helps Java enable high performance.
Q.41. What is the use of abstract methods?
Answer. An abstract method is a method having no method body. It is declared but contains no implementation. The use of abstract methods is when we need a class to contain a particular method but want that its actual implementation occurs in its child class, then we can declare this method in the parent class as abstract. This abstract method can be used by several classes to define their own implementation of the method.
Q.42. Define JVM.
Answer. Java Virtual Machine is a virtual machine that enables a computer to execute Java bytecode. JVM acts like a run-time engine for Java that calls the main method present in the Java program. JVM is a specification whose implementations run on the computer system. The Java compiler converts the source code into bytecode, which is machine-independent and close to the native code, and the JVM then executes this bytecode on the host platform.
Q.43. Differentiate between JVM, JDK, and JRE.
Answer.
- JDK stands for Java Development Kit, while JRE stands for Java Runtime Environment, while the full form of JVM is Java Virtual Machine.
- JVM is an environment to execute or run Java bytecode on different platforms, whereas JDK is a software development kit and JRE is a software bundle that allows Java programs to run.
- JVM is platform-independent, but both JDK and JRE are platform dependent.
- JDK contains tools for developing and debugging Java applications whereas JRE contains class libraries and other tools and files, whereas JVM does not contain software development tools.
- JDK comes with an installer, whereas JRE only contains the environment to execute already-compiled code.
- JVM is bundled inside both JDK and JRE.
Q.44. What is a NullPointerException in Java?
Answer. NullPointerException is a Runtime or Unchecked Exception in Java and it occurs when an application or a program attempts to use an object reference that has a null value. It is a situation when a programmer tries to access or modify an object that has not been initialized yet and points to nothing. It means that the object reference variable is not pointing to any value and refers to ‘null’ or nothing.
Some situations of getting NullPointerException include:
- When we call an instance method on the object that refers to null.
- When we try to access or modify an instance field of the object that refers to null.
- When the reference type is an array type and we are taking the length of a null reference.
- When the reference type is an array type and we try to access or modify the slots of a null reference.
- If the reference type is a subtype of Throwable and we attempt to throw a null reference.
Example:
Object obj = null;
obj.toString(); // This statement will throw a NullPointerException
Q.45. What is a wrapper class in Java?
Answer. A wrapper class is a predefined class in Java that wraps the primitive data types values in the form of objects. When we create the object of a wrapper class, it stores a field and we can store primitive data types in this field. We can wrap a primitive value into an object of the wrapper class.
There are 8 wrapper classes corresponding to each primitive data type in Java. They are: Boolean (boolean), Byte (byte), Short (short), Integer (int), Long (long), Float (float), Double (double), and Character (char).
All these classes are present in the java.lang package.
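A small sketch of wrapping and unwrapping a primitive. Since Java 5 the conversion can also happen implicitly through autoboxing.

```java
// Wrapper classes: boxing a primitive int into an Integer object and back.
public class WrapperDemo {
    public static void main(String[] args) {
        Integer boxed = Integer.valueOf(42);  // explicit boxing
        int unboxed = boxed.intValue();       // explicit unboxing

        Integer auto = 42;                    // autoboxing (implicit, since Java 5)
        int back = auto;                      // auto-unboxing

        System.out.println(unboxed + " " + back);
    }
}
```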
Q.46. State the difference between a constructor and a method in Java?
Answer. Again, a popular Java interview question. The differences between constructor and method are:
- The constructor initializes an object of the class whereas the method exhibits the functionality of an object.
- Constructors are invoked implicitly when the object is instantiated whereas methods are invoked explicitly by calling them.
- The constructor does not return any value whereas the method may or may not return a value.
- In case a constructor is not present in the class, the Java compiler provides a default constructor. But, in the case of a method, there is no default method provided.
- The name of the constructor should be the same as that of the class. But, the Method name should not be of the same name as that of class.
Q.47. What is the need for wrapper classes in Java?
Answer. As we know that Java is an object-oriented programming language, we need to deal with objects in many situations like Serialization, Collections, Synchronization, etc. The wrapper classes are useful in such scenarios. Let us see the need for wrapper classes in Java:
1. Passing values to methods: Java supports only call by value, so if we pass a primitive value, the original value cannot change inside the method. Wrapping the value in an object lets the method receive a reference to that object instead.
2. Synchronization: Java synchronization works with objects so we need wrapper class to get the objects.
3. Serialization: We convert the objects into byte streams and vice versa. If we have a primitive value, we can convert it into objects using wrapper classes.
4. Collection Framework: Collection framework in Java deals with only objects. All the classes of the collection framework like ArrayList, LinkedList, Vector, HashSet, LinkedHashSet, TreeSet, PriorityQueue, etc deal with objects only.
Q.48. Can you overload a constructor in Java?
Answer. Yes, it is possible to overload constructors in Java. We can define multiple constructors with different parameter types, their order, and number.
Constructor overloading is a technique in Java that allows a class to have any number of constructors that differ in the parameter lists. The compiler differentiates these constructors with respect to the number of parameters in the list and their type.
Q.49. Which is the parent class for all the classes in Java?
Answer. The Object class is the superclass for all the classes in Java. In other words, all the classes in Java ultimately inherit from Object class. To prove this, let’s see an example:
class Test {
    public static void main(String args[]) {
        System.out.println("Helloworld");
    }
}
For the above program, when we type javap Test then we get the following output:
class Test extends java.lang.Object {
    Test();
    public static void main(java.lang.String[]);
}
The first line itself shows that by default it extends java.lang.Object.
Q.50. Can you overload a main() method in Java?
Answer. Yes, we can overload the main() method in Java. We need to call the overloaded main() method from the actual main() method of the class. The overloaded main method needs to be called from inside the “public static void main(String args[])” statement. As this line is the entry point when JVM launches the class.
Q.51. What do you mean by the array in Java?
Answer. This is a common Java interview question. An array in Java is a collection of elements of the same type arranged in contiguous memory locations. It is a kind of container that holds data values of one single data type. For example, we can create an array holding 100 values of int type. Arrays are a fundamental construct in Java that allows us to store and access a large number of values conveniently.
Array Declaration:
In Java, here is how we can declare an array.
dataType arrayName[];
- dataType – it can be primitive data types like int, char, double, byte, etc. or Java objects
- arrayName – it is an identifier
Example:
double doubleArray[];
String myArray[];
Array Initialization:
To initialize an array we use:
dataType[] arrayName = new dataType[arraySize];
Example:
int arr[] = new int[10];
Array arr can hold 10 elements.
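The declaration and initialization steps above can be sketched together with a simple traversal; the helper method is an assumption for illustration.

```java
// Declaring, initializing, and iterating an array of int.
public class ArrayDemo {
    static int sum(int[] values) {
        int total = 0;
        for (int v : values) {  // enhanced for loop over the contiguous elements
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] arr = new int[] {1, 2, 3, 4, 5};  // declaration + initialization in one step
        System.out.println("length = " + arr.length + ", sum = " + sum(arr));
    }
}
```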
Q.52. What are the different data types in Java?
Answer. There are two different types of data types in Java: Primitive Data types, and reference data types. There are eight primitive data types in Java: int, short, byte, long, char, boolean, float, and double. Examples of reference data types are arrays, strings, interfaces, etc.
Q.53. What do you mean by UNICODE in Java?
Answer. Unicode System is a universal international standard character encoding that represents most of the written languages of the world. The main objective of Unicode is to combine different language encoding schemes in order to avoid confusion among computer systems that use limited encoding standards like ASCII, EBCDIC, etc. Java was designed to use Unicode Transformed Format (UTF)-16 when the UTF-16 was designed.
Q.54. What are the advantages and disadvantages of arrays?
Answer.
Advantages of arrays:
- It is easier access to any element of an array using the index.
- With an array, it is easy to manipulate and store large data.
Disadvantages of arrays:
- Arrays are of fixed size. We can not increase or decrease it once we declare it.
- An array can store only a single type of primitives.
Q.55. What is the difference between static and dynamic binding in Java?
Answer. If linking between method call and method implementation resolves at compile-time, then it is called static binding. And, if the linking gets resolved at run time then it is dynamic binding. The dynamic binding uses objects to resolve to bind, while static binding uses the type of the class and fields for binding.
Q.56. What is the difference between inner and anonymous inner classes?
Answer: A class inside a class is called nested classes in Java. An inner class is any nested class that is non-static in nature. Inner classes can access all the variables and methods of the outer class.
Anonymous inner class is any local inner class without any name. We can define and instantiate it in a single statement. Anonymous inner classes always either extend/inherit a class or implement an interface. Since there is no name of an anonymous inner class, it is not possible to create its constructor.
Q.57. What are the statements in Java?
Answer. Statements are like sentences in natural language. A statement gives a complete unit of execution. We can make the following types of expressions into a statement by terminating the expression with a semicolon
- Assignment expressions
- Any use of ++ or —
- Method calls
- Object creation expressions
The above statements are called expression statements. There are two other kinds of statements in addition to these expression statements. A declaration statement declares a variable. A control flow statement regulates the order or the flow in which statements get executed. The for loop and the if statement is some examples of control flow statements.
Q.58. What is the difference between the boolean & and && operator?
Answer. Both operands are evaluated if an expression involving the Boolean & operator is performed. After that, the & operator is applied to the operand.
When there is an evaluation of an expression involving the && operator, then the first operand is evaluated. If the first operand returns true then the second operand is evaluated. Then, the && operator is applied to the first and second operands. If the first operand results to false, then there is no evaluation of the second operand.
Q.59. How do you name Java source code files?
Answer. The name of a source code file of Java is the same as the public class or interface defined in the file. In a source code file, there is at most one public class or interface. The source code file must take the name of the public class or interface if there is a public class or interface in a source code file. And, if there is no public class or interface present in a source code file, then the file must take on a name that is different from its classes and interfaces. Source code files use the .java extension.
Q.60. If you declare a class without any access modifiers, then where it is accessible?
Answer. If we declare a class that without any access modifiers, we call the class to have a default or package access. This means that the class is only accessible by other classes and interfaces that are defined within the same package. No classes or interfaces outside the package can access this class.
Q.61. State the purpose of the Garbage Collection in Java.
Answer. The purpose of garbage collection in Java is to detect and eliminate/delete the objects that are no longer in use in the program. The objects that are no longer reachable are removed so that their resources may be reclaimed and reused.
Q.62. What is JNI? What are its advantages and disadvantages?
Answer. The full form of JNI is the Java Native Interface. With the help of JNI, we can call functions written in languages other than Java.
The advantages and disadvantages of JNI are:
Advantages:
- When we want to use the existing library that we previously developed in another language.
- When there is a need to call the Windows API function.
- To increase the execution speed.
- When we need to call the API function of some server product which is written in C or C++ from a Java client.
Disadvantages:
- There is a difficulty in debugging runtime errors in native code.
- There may be a potential security risk.
- We can not call it from Applet.
Q.63. What is Serialization in Java?
Answer. Serialization in Java enables a program to read or write a whole object in byte stream and to read that byte stream back to the object. It allows Java objects and primitive data types to be encoded into a byte stream so that it is easy for streaming them to some type of network or to a file-system.
A serializable object must implement the Serializable interface that is present in the java.io package. We use ObjectOutputStream class to write this object to a byte stream and ObjectInputStream to read the object from the byte stream.
Q.64. Why does Java not have multiple inheritances?
Answer. This is one of the most important Java oops interview questions. Java introduced Java language to make it:
- Simple and familiar
- Object-oriented
- Robust
- Secure
- Architecture neutral
- Portable
- High performance
- Multi-threaded and Dynamic
The reasons for not supporting multiple inheritances mostly arise from the goal of making Java simple, object-oriented, and familiar. The creators of Java wanted that most developers could grasp the language without extensive training. For this, they worked to make the language as similar to C++ as possible without carrying over its unnecessary complexity.
According to Java designers, multiple inheritances cause more problems and confusion. So they simply cut multiple inheritances from the language. The experience of C++ language taught them that multiple inheritances just was not worth it. Due to the same reason, there is no support for Multiple Inheritance in Java.
Q.65. What is synchronization in Java and why is it important?
Answer. Synchronization in Java is the ability to control the access of multiple threads to shared resources. Without synchronization, it is not possible for a thread to access a shared object or resource while another thread is already using or updating that object’s value.
Q.66. Why has the String class been made immutable in Java?
Answer. The String class is immutable to achieve performance & thread-safety in Java.
1. Performance: Immutable objects are ideal for representing values of abstract data types like numbers, enumerated types, etc. Suppose, if the Strings were made mutable, then string pooling would not be possible because changing the String with one reference will lead to the wrong value for the other references.
2. Thread safety: Immutable objects are inherently threaded safe as we cannot modify once created. We can only use them as read-only objects. We can easily share them among multiple threads for better scalability.
Q.67. What are the differences between C++ and Java?
Answer. Both C++ and Java are similar and Object-Oriented and use almost similar syntax but there are many differences between them. The differences between C++ and Java are:
Q.68. What are finally and finalize in Java?
Answer. The finally block is used with a try-catch block that we put the code we always want to get executed even if the execution is thrown by the try-catch block. The finally block is just used to release the resources which were created by the try block.
The finalize() method is a special method of the Object class that we can override in our classes. The garbage collector calls the finalize() method to collect the garbage value when the object is getting it. We generally override this method to release the system resources when garbage value is collected from the object.
Q.69. What is Type Casting in Java?
Answer. There are some cases when we assign a value of one data type to the different data types and these two data types might not be compatible with each other. They may need conversion. If data types are compatible with each other, for example, Java does the automatic conversion of int value to long and there is no need for typecasting. But there is a need to typecast if data types are not compatible with each other.
Syntax
dataType variableName = (dataType) variableToConvert;
Q.70. What happens when an exception is thrown by the main method?
Answer. When the main() method throws an exception then Java Runtime terminates the program and prints the exception message and stack trace in the system console.
Q.71. Explain the types of constructors in Java?
Answer. There are two types of Java constructors based on the parameters passed in the constructors:
Default Constructor: The default constructor is a non-parameterized constructor that does not accept any value. The default constructor mainly initializes the instance variable with the default values. We can also use it to perform some useful task on object creation. A compiler implicitly invokes a default constructor if there is no constructor defined in the class.
Parameterized Constructor: The parameterized constructor is the constructor with arguments and one which can initialize the instance variables with the given values. We can say that the parameterized constructors are the constructors that can accept the arguments.
Q.72. Why does Java not support pointers?
Answer. The pointer is a variable that refers to some memory address. Java does not support pointers because they are unsafe, unsecured, and complex to understand. The goal of Java is to make it simple to learn and understand and also a secure language, therefore Java avoids the use of such complex and unsafe concepts.
Q.73. What is the String Pool?
Answer. The string pool is the reserved memory in the heap memory area. It is mainly used to store the strings. The main advantage of the String pool is whenever we create a string literal, JVM first checks it in the “string constant pool”. If the string is already present in the pool, then it returns a reference to the pooled instance. If the string is not present in the pool, then it creates a new String and places it in the pool. This saves memory by avoiding duplicate values.
Java Basic Programs for Interview
Now, it’s time to move towards Java interview programs, there are few popular Java codes which are frequently asked in the interviews. We recommend you to practice them while reading.
Q.74. What is the toString() method in Java?
Answer. String is an important topic during any Java interview, usually, interviewers ask multiple java string interview questions.
The toString() method in Java is used to return the string representation of an object. The compiler internally invokes the toString() method on the object when you print any object. So we can get the desired output by overriding the toString() method. We can return the values of an object by overriding the toString() method of the Object class. So, there is no need to write much code.
Consider the following example.
class Student { int rollno; String name; Student(int rollno, String name) { this.rollno = rollno; this.name = name; } public String toString() { //overriding the toString() method return rollno + " " + name + " ; } public static void main(String args[]) { Student str1 = new Student(101," Sneha”); Student str2 = new Student(102, "Raj”); System.out.println(str1); //compiler writes here str1.toString() System.out.println(str2); //compiler writes here str2.toString() } } "
Output:
101 Sneha
102 Raj
Q.75. Write a program to count the number of words in a string?
Answer. The following program counts the number of words in a String:
public class Test { public static void main(String args[]) { String str = "I am enjoying learning Java"; String words[] = str.split(" "); System.out.println("The number of words in the given string are: " + words.length); } }
Output:
The number of words in the given string is: 5
Q.76. What are the advantages of Java inner classes?
Answer. The advantages of Java inner classes are:
- Nested classes show a special type of relationship and it can access all the data members and methods of the outer class including private members.
- Nested classes develop a more readable and maintainable code because they logically group classes and interfaces in one place only.
- Nested classes enable Code Optimization as they require less code to write.
Q.77. What are autoboxing and unboxing? When does it occur?
Answer. This is also a popular Java interview question. Autoboxing is the process of converting primitive data types to the respective wrapper class object, for example, int to Integer or char to Character. Unboxing is the reverse process of autoboxing, i.e., converting wrapper class objects to the primitive data types. For example, Integer to int or Character to char. Autoboxing and Unboxing occur automatically in Java. However, we can convert them explicitly by using valueOf() or xxxValue() methods.
It can occur whenever there is a need for a wrapper class object, but a primitive data type is present or vice versa. For example:
- Adding primitive data types into Collection like ArrayList Set, LinkedList, etc, in Java.
- When we need to create an object of parameterized classes, for example, ThreadLocal which expects Type.
- Java automatically converts primitive data types to wrapper class objects whenever required and another is provided in the method calling.
- When a primitive type is assigned to a wrapper object type.
Q.78. What is a Loop in Java? What are the three types of loops?
Answer. This is the most basic interview question that you must know mandatorily before attending any interviews. Looping is one of the most important concepts of programming that is used to implement a statement or a block of statements iteratively. There are three kinds of loops in Java, we will discuss them briefly:
a. for loops:
A for loop in Java is used to implement statements iteratively for a given number of times. We use for loops when the programmer needs to refer to the number of times to implement the statements. It consists of three statements in a single line: Initialization, test-condition, update statement. The syntax of for loop is:
for(Initialization; test-condition; update expression)
b. while Loops:
The while loop is used if we require certain statements to be implemented regularly until a condition is fulfilled. The condition gets tested before the implementation of statements in the while loop, therefore it is also called the entry controlled loop. The syntax of while loop is:
while(test-condition) { //statement/s }
c. do-while loops:
A do-while loop is the same while loop, the only difference is that in the do-while loop the condition is tested after the execution of statements. Thus in the do-while loop, statements are implemented at least once. These are also called exit controlled loops. The syntax of the do-while loop is:
do { //statements }while(test-condition)
Q.79. State the difference between the comparison done by equals method and == operator?
Answer. The difference between equals() method and == operator is the most frequently asked question. Equals() method compares the contents of two string objects and returns true if they both have the same value, whereas the == operator compares the two string objects references in Java. In the below example, equals() method returns true as the two string objects contain the same values. The == operator returns false as both the string objects are referencing to different objects:
public class Test { public static void main(String args[]) { String srt1 = “Hello World”; String str2 = “Hello World”; if (str1.equals(str2)) { System.out.println(“str1 and str2 are equal in values”); } if (str1 == str2) { //This condition is false System.out.println(“Both strings are referencing same object”); } else { // This condition is true System.out.println(“Both strings are referencing different objects”); } } }
Output:
str1 and str2 are equal in terms of values
Both strings are referencing different objects
Q.80. State the difference between error and an exception?
Answer. An error is an irrecoverable condition that occurs during the execution or runtime of the program. For example, OutOfMemory error. These are JVM errors and we can not repair or recover from them at runtime. On the other hand, Exceptions are conditions that occur because of wrong input given by the user or the bad illogical code written in the code, etc.
For example, FileNotFoundException is thrown if the specified file does not exist. Or, if there is a NullPointerException if we try to use a null reference. In most cases, it is possible to recover from an exception either by giving users feedback for entering proper values, or handling exceptions through various methods.
Q.81. What is an Infinite Loop? How an infinite loop is declared?
Answer. An infinite loop runs without any condition and runs infinitely without ending until we stop the execution. We can come out of an infinite by defining any breaking logic in the body of the statement blocks.
We can declare the Infinite loop as follows:
for (;;) { // Statements to execute // Add any loop breaking logic }
Q.82. How can you generate random numbers in Java?
Answer. In Java we can generate random numbers in two ways:
- Using Math.random() function, we can generate random numbers in the range of 0.1 and 1.0
- Using Random class in the java.util package.
Q.83. What is the System class?
Answer. It is a core class in Java. Since the class is final, we cannot override its behavior through inheritance. Neither can we instantiate this class since it doesn’t provide any public constructors. Hence, all of its methods are static.
Q.84. Explain various exceptions handling keywords in Java?
Answer. There are three important exception handling keywords in Java:
try:
If a code segment has chances of having an error, we pace it within a try block. When there is an exception, it is handled and caught by the catch block. There must be a catch or a final or both blocks after the try block.
catch:
Whenever there is an exception raised in the try block, it is handled in the catch block.
finally:
The finally block executes irrespective of the exception. We can place it either after try{} or after the catch {} block.
Q.85. Can we convert byte code into source code?
Answer. Yes, it is possible to convert byte code into the source code. A decompiler in Java is a computer program that works opposite from the compiler. It can convert back the byte code or the .class file into the source code or the .java file. There are many decompilers but the most widely used JD – Java Decompiler is available both as a stand-alone GUI program and as an Eclipse plug-in.
Q.86. State the basic difference between String, StringBuffer, and StringBuilder?
Answer.
- String class is immutable in Java, and this immutability provides security and performance.
- StringBuffer class is mutable, hence we can add strings to it, and when required, we can also convert to an immutable String using the toString() method.
- StringBuilder class is very similar to a StringBuffer, but StringBuffer has one disadvantage in terms of performance. This is because all of its public methods are synchronized for thread-safety.
- If thread-safety is required, use StringBuffer class, otherwise use StringBuilder.
Q.87. Distinguish between a unary, binary, and a ternary operator. Give examples.
Answer.
1. Unary Operator: A unary operator requires a single operand. Some unary operators in Java are: unary+, unary-, ++, –, sizeof, instanceof, etc.
2. Binary Operator: Binary operator works on two operands. Some binary operators in Java are:
- Addition(+)
- Subtraction(-)
- Multiplication(*)
- Division(/)
- Modulus(%)
- &&, || , etc.
3. Ternary Operator: Ternary operators require three operands to work upon. The conditional operator- ?: is a ternary operator in Java.
Q.88. State the rules of Operator Precedence in Java.
Answer. Operator Precedence Hierarchy in Java evaluates all the expressions. Operator Precedence Hierarchy establishes the rules that govern the order of evaluation of operands in an expression. The rules are:
Operators: (type), *, /, and the remainder or modulus operator(%) are evaluated before + and – operators.
Any expression in parenthesis {} is evaluated first.
The precedence of the assignment operator is lower than any of the arithmetic operators.
Q.89. What is a fall through in Java?
Answer. The “fall through” is the term used in the switch statement. It refers to the way in which the switch statement executes the various case sections. Every statement that follows the selected case executes until it encounters a break statement.
Q.90. Tell the difference between Call by Value and Call by Reference in Java.
Answer. In call by value, the function creates its own copy of the passed parameters. It copies the passed values in it. If there are any changes, they remain in the copy and no changes take place in the original data.
On the other hand, in call by reference, the called function or method receives the reference to the passed parameters and it accesses the original data through this reference. Any changes that take place are directly reflected in the original data.
Q.91. What are the different types of arrays in Java? Give examples of each.
Answer. Arrays are of two types:
1. Single dimensional arrays/one-dimensional arrays- These arrays are composed of finite homogeneous elements. This is the simplest form of arrays. We give it a name and refer to the elements by using subscripts or indices.
Declaring single dimensional arrays:
datatype arrayName[] = new datatype[size];
or
datatype[] arrayName = new datatype[size];
2. Multi-dimensional arrays- These arrays are composed of elements, each of which itself is an array. The two-dimensional arrays are the simplest form of multi-dimensional arrays. Java allows more than two dimensions. The exact limit of dimensions is decided by the compiler we use.
A two-dimensional array(2D array) is an array in which each element is itself a one-dimensional array. For example, an array arr[P][Q], is an array P by Q table with P rows and Q columns, containing P x Q elements.
Declaring two-dimensional arrays:
datatype arrayName[] = new datatype[rows][columns];
or
datatype [] [] = new datatype[rows][columns];
Q.92. What are keywords in Java? How many keywords are used in Java?
Answer. Keywords in Java are the reserved words that convey a special or particular meaning to the compiler. We cannot use the keywords as an identifier in a program. There are 51 keywords in Java. For example class, int, break, for, switch, abstract, etc.
Q.93. Differentiate between actual and formal parameters in Java?
Answer. The data necessary for the function to perform the task is sent as parameters. Parameters can be actual parameters or Formal Parameters.
The difference between Actual Parameters and Formal Parameters is that Actual Parameters are the values that are passed to the function when it is invoked while Formal Parameters are the variables defined by the function that receives values when the function is called.
Q.94. State the difference between a while and do-while statement in Java?
Answer. The while and do-while loop are the same but the difference is that in the do-while loop the loop executes for at least once. The while loop is the entry-controlled loop and the do-while loop is the exit- controlled loop.
Q.95. What is the PATH and CLASSPATH in Java?
Answer. PATH in Java is the environment variable in which we mention the locations of binaries files. Example: We add bin directory path of JDK or JRE, so that any binaries under the directory can be accessed directly without specifying absolute path. CLASSPATH is the path for Java applications where the classes you compiled will be available.
1. The path is an environment variable that the operating system uses to find the executable files. On the other hand, Classpath is an environment variable that a Java compiler uses to find the path of classes.
2. PATH is used for setting up an environment for the operating system. The Operating System will search in this PATH for executables. On the other hand, Classpath is nothing but setting up the environment for Java. Java will use it to find compiled classes.
3. Path refers to the system while classpath refers to the Developing Environment.
Q.96. What is a Singleton class and how can we create it?
Answer. A singleton class is a class that has only one object or an instance of the class at a time. The singleton class provides a global point of access to the object. If we talk about the practical applications of Singleton class, then Singleton patterns are used in logging, caches, thread pools, configuration settings, device driver objects.To design a singleton class, we have to:
- Mark the class’s constructor as private.
- Write a static method with a return type as an object of this singleton class. Here, we use the concept of Lazy initialization to write this static method.
Q.97. State the difference between Array and ArrayList in Java.
Answer. An Array is a data structure that has a fixed and static length, whereas ArrayList is a Collection in Java with a variable length. We can not change or modify the length of an array once we create it in Java. But, we can change the length of an ArrayList even after creation. It is not possible to store primitives in ArrayList. An ArrayList can only store objects. But, in an array there can be both primitives and objects in Java.
Q.98. What is object cloning in Java?
Answer. The term object cloning in Java refers to the way of creating an exact copy of an object. The clone() method of the Object class clones or creates a copy of an object. The class that wants its object to be cloned, must implement the java. lang. Cloneable interface. If the class does not implement this Cloneable interface, then the clone() method generates a CloneNotSupportedException.
There are two types of Object cloning in Java: – Deep Cloning and Shallow Cloning. By default, Java uses Shallow Cloning.
Q.99. Differentiate between java.util.Date and java.sql.Date in Java?
Answer. java.sql.Date just represents the date without time information whereas java.util.Date represents information of both Date and Time. This is the major difference why there is no direct mapping of java.util.Date to java.sql.Date.
Date class that belongs to util package of Java and has is a combination of date and time while Date class that belongs to SQL package represents only the Date part.
Precisely, the Date contains information of year, month, and day and the Time means hour, minute, and second information. The java.util.Date class contains all year, month, day, hour, minute, and second information, but the class java.sql.date only represents the year, month, and day.
Q.100. Compare recursion and iteration.
Answer. In iteration, the code is executed repeatedly using the same memory space. That is, the memory space allocated once is used for each pass of the loop.
On the other hand, in recursion, since it involves function calls at each step, fresh memory is allocated for each recursive call. For this reason, i.e., because of function call overheads, the recursive function runs than its iterative counterpart.
Conclusion
We have covered the top Java interview questions with answers for freshers. The key to success in the Java interview is going through as many questions as you can.
These questions are the most frequently asked questions for any fresher.
Did you like our efforts? If yes, please rate TechVidvan on Google. | https://techvidvan.com/tutorials/java-interview-questions-and-answers/ | CC-MAIN-2020-45 | refinedweb | 10,610 | 56.55 |
I am writing a program for the PSoC 5LP that uses an SD card.
I am using emFile and FS.h.
Because I want to keep power consumption as low as possible, I am also trying to control the SD card's power supply.
For that purpose, I use a 4-terminal 3.3 V regulator to cut the SD card's power before entering sleep.
I also wrote the program so that the SPI CS pin is driven LOW.
#include <project.h>
#include <FS.h>
#include "stdio.h"

FS_FILE * pFile;

int main()
{
    CyGlobalIntEnable; /* Enable global interrupts. */
    CONS_Start();
    FS_Init();

    while(1)
    {
        /* Restore SD card power and raise CS before accessing the card. */
        SD_Power_Write(1);
        emFile_SPI0_CS_Write(1);

        pFile = FS_FOpen("\\data\\4.csv", "a");
        if(pFile)
        {
            if(0 == FS_FClose(pFile))
            {
                CONS_PutString("File was closed\r\n");
            }
            else
            {
                CONS_PutString("Failed to close\r\n");
            }
        }
        else
        {
            CONS_PutString("Failed to write file\n");
        }

        /* Cut SD card power and drive CS low during the idle period. */
        SD_Power_Write(0);
        emFile_SPI0_CS_Write(0);
        CyDelay(5000);
    }
    return 0;
}
The output from running this program is shown below.
Start
File was closed
Failed to write file
Failed to write file
Failed to write file
Failed to write file
Failed to write file
Failed to write file
Once the power has been cut, the file can no longer be opened afterwards.
Is there a method I should call before cutting the power?
Or should the power simply not be cut at all?
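One approach that may be worth trying, sketched below and not verified on hardware: unmount the volume before cutting power so that emFile discards its cached card state, then re-mount after power returns. Whether this emFile build exposes FS_Unmount()/FS_Mount() for the default volume ("") is an assumption.

```c
/* Sketch only (assumes FS_Unmount()/FS_Mount() are available in this
 * emFile build): let the file system flush and forget the card before
 * the regulator is switched off, then re-mount afterwards. */
FS_Unmount("");            /* flush buffers and forget the card state */
SD_Power_Write(0);         /* now it is safe to cut SD card power     */
emFile_SPI0_CS_Write(0);

CyDelay(5000);

SD_Power_Write(1);         /* restore SD card power                   */
emFile_SPI0_CS_Write(1);
CyDelay(10);               /* allow the card to complete power-up     */
FS_Mount("");              /* re-mount so the next FS_FOpen can work  */
```

If re-mounting this way still fails, that would suggest the card needs a full re-initialization after power-up, which the mount call is assumed to trigger here.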
Addendum:
The schematic and the parts used are listed below.
microSD socket:
4-terminal regulator for the SD card supply:
Main supply DC-DC converter (3.3 V output):
Hello,
I am using a time-stamp method to measure the frequency of a signal. The signal is a square wave, and a DMA transfer starts when a rising edge is detected. A circular RAM buffer holds the timestamps (counts of the timer) captured each time the input signal has a rising edge. A function is called periodically to subtract the counts from each other to estimate the frequency.
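The periodic subtraction step can be sketched as below. The buffer length, the timer clock, and the down-counting direction are assumptions, and the names are hypothetical rather than taken from the linked project.

```c
/* Sketch of the periodic frequency estimate. Assumptions: a 16-bit
 * down-counting timer clocked at TIMER_CLOCK Hz, and a DMA-filled
 * circular buffer of timestamps; all names here are hypothetical. */
#include <stdint.h>

#define BUF_LEN      16u        /* circular timestamp buffer length  */
#define TIMER_CLOCK  1000000u   /* timer input clock in Hz (assumed) */

/* Estimate the input frequency in Hz from the two newest samples.
 * 'head' indexes the most recent entry written by the DMA. */
static uint32_t estimateFreqHz(const uint16_t buf[BUF_LEN], uint8_t head)
{
    uint8_t  prev  = (uint8_t)((head + BUF_LEN - 1u) % BUF_LEN);
    /* Down-counter: the older sample holds the larger count, and the
     * uint16 wrap makes the subtraction correct across rollover. */
    uint16_t delta = (uint16_t)(buf[prev] - buf[head]);
    return (delta != 0u) ? (TIMER_CLOCK / delta) : 0u;
}
```

Averaging over more than one pair of consecutive timestamps would reduce jitter, at the cost of a slower response to frequency changes.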
I used the example project from CONSULTRON that is posted here:
Everything works as expected when I use the "Fixed Function" timer block with 16 bits. However, when I change it to a 24- or 32-bit UDB timer block, the counts do not make sense. The least significant byte of the RAM buffer always has the same value for some reason. When I look at the debug window for the component, the count matches what is expected in the Memory viewer, but it seems like the value at that address is not being transferred via DMA.
I referenced AN61102, which shows how to transfer 32-bit values over 16-bit spokes. I added another DMA channel as an intermediate step, and from the intermediate step the data goes to the larger buffer.
I was using the following #define in the DMA configuration to retrieve the counts, which is 0x4000_6508: Timer_Count_COUNTER_LSB_PTR_8BIT
Dear.
I just inherited code that sets a reserved bit, and that bit does make a difference in a CapSense measurement. When the reserved bit is set I get "proper behavior". In a test I did not set this bit, and the CapSense was reading out raw data in the 255 range when it should have been reading 0. To get this design to work, this bit has to be set. What does this bit do, and how could my predecessor possibly have known to set it?
This bit is bit 7 of CS_CR3, which is being set by the following code segment.
CS_CR3 |= 0x90; // Reference buffer to drive the analog global bus. Bit 7 is not used, so why was it set? Reserved bits should not be set, yet without it the raw CapSense value reads 255 instead of 0.
// CS_CR3 |= 0x10; // Reference buffer to drive the analog global bus. Causes 255 to be read in the raw value instead of 0.
The CapSense code follows below this.
Can someone please tell me what this means?
This happens when I add the ist module and connect it to the UART tx_en port.
I'm really bad at this; I'm just trying to follow a video lecture.
I hope someone can help me.
Hello.
The customer is using the CY8C5868LTI-LP039.
They are using the EEPROM and DieTemp components.
When the die-temperature measurement is started, writing to the EEPROM fails.
If DieTemp_1_Start() is not called, EEPROM_1_Write() succeeds.
But EEPROM_1_Write() fails if DieTemp_1_Start() has been called.
The error status is CYRET_UNKNOWN.
Why does EEPROM_1_Write() return an error?
Also, could you please let us know a way to make EEPROM_1_Write() succeed even when DieTemp_1_Start() is used?
Best Regards.
Yutaka Matsubara
Hi,
Can we implement the TPS25982 block diagram on a PSoC 5LP MCU? I am attaching the link for the datasheet of the TPS25982 and the functional block diagram.
Page No: 18
Please let me know in detail about the different blocks.
To all,
A question was asked on an earlier form post about supporting more than 8 digits on a 7-segment LED display.
Post: LED-Driver-with-10-commons-how-it-is-possible
It is possible to support up to 24 digits with with the Cypress/Infineon LED Segment and Matrix Driver (V1.10).
Using this driver, only up to 8 commons are supported. To use this driver for more than 8 digits you need to allocate more segments per common. This is described on pages 30 and 31 of the datasheet.
For example to allocate 24 digits of 7-segments (really 8-segments) you need to do the following in the configuration for 8 commons and 24 segments. If you need 16 digits you can allocate 8 commons and 16 segments. For 15 digits, 5 commons and 24 segments.
In other words, let's say you need 'x' digits. Then you need a minimum of 'y' commons, where y = ROUNDUP(x / 3).
Segments 0-7 are used for the first set of digits (1 to x/y), segments 8-15 for the next set ((x/y)+1 to 2x/y), and (if needed) segments 16-23 for the last set ((2x/y)+1 to x).
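The allocation arithmetic can be sketched in Python (an editor's illustration only, not part of the component; note the 16-digit example above deliberately uses more commons than this minimum, which is also a valid split):

```python
import math

def allocate(digits):
    """Minimum commons/segments split for `digits` 8-segment digits,
    following the y = ROUNDUP(x / 3) rule (up to three 8-segment banks)."""
    commons = math.ceil(digits / 3)        # y = ROUNDUP(x / 3)
    banks = math.ceil(digits / commons)    # how many 8-segment banks are used
    return commons, banks * 8

print(allocate(24))  # (8, 24)
print(allocate(15))  # (5, 24)
```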
Each digit can be accessed in the API calls using the "position" variable.
The downside of sharing commons across digits is that it requires more current to be sourced or sunk. This can be achieved by using external components such as NPN and PNP transistors.
As a parting note: I have provided a new LED matrix driver component that can allocate up to 16 commons. This is a functional extension of the Cypress component. This component allows for separate commons drive for up to 16 digits if you need it. Additionally in a similar manner this new component can handle up to 48 7-segment digits.
The new component can be found at: Code-Examples/New-LED-Matrix-Driver-Component-Max-24-Segments-and-Max-16-Commons
Hi All,
I am working with the CY8C29466 IC. I programmed the code into the IC and checked it on hardware, and I get ADC counts. Then I tried another piece of the same IC, and with the same input voltage, the same code, and even the same PCB, the ADC counts vary by about 15-20 counts. Every IC shows different counts, and I don't understand what's wrong.
For example, for one IC the ADC counts are 2 at 0 millivolts and 1060 at 500 millivolts.
I programmed the code into another CY8C29466 IC (with the same PCB and the same input voltages) and got counts of 1 at 0 millivolts and 1047 at 500 millivolts.
I tried 4 different ICs and every time I get different counts at the same voltage on the same PCB. The ADC counts are stable and the ratios with respect to the applied input are correct, but because of the variation in maximum counts every IC has a different ratio factor.
I don't understand what's wrong. Could it be an issue of IC tolerance? Please suggest how to resolve this.
Thank You.
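As an editor's sketch using the counts quoted above: a per-device two-point calibration absorbs exactly this kind of part-to-part gain/offset spread.

```python
# Two-point linear fit per device, using the counts quoted in this post.
def fit(c0, c500):                  # counts measured at 0 mV and at 500 mV
    gain = (c500 - c0) / 500.0      # counts per mV ("ratio factor")
    offset = c0                     # counts at 0 mV
    return gain, offset

def to_mv(counts, gain, offset):    # invert the fit for any later reading
    return (counts - offset) / gain

g1, o1 = fit(2, 1060)               # first IC
g2, o2 = fit(1, 1047)               # second IC
print(g1, g2)                       # 2.116 2.092 -- each part has its own factor
print(round(to_mv(1060, g1, o1)))   # 500 -- correct once calibrated per part
```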
Does this device have input voltage ripple or spike specifications?
Ripple of up to 130 mVp-p occurs at 5 V in low-temperature conditions. LVD is set to 4.81 V (Typ), but it has not reset. (VM[2:0] = 111b)
Thanks, Tetsuo
Perl CGI
Is there a lot of Perl CGI scripting going on nowadays? It seems very cumbersome to me, but I've only written one application so far.
anon
Thursday, July 15, 2004
Funny, I find Perl CGI to be the most agile of many web scripting options. I usually fall back on it when it's taking to long to solve a problem in another language.
muppet
Thursday, July 15, 2004
Cumbersome how? Compared to what? I find it incredibly easy to build productive Web functionality quickly using Perl CGI, but that might just be a function of experience and familiarity.
John C.
Thursday, July 15, 2004
********* WARNING *********
These are my humble, humble
opinions, and are therefore
not flame bait!
***************************
>> "Cumbersome how? Compared to what?"
I'd rather use PHP or JSP or ASP or something like that. It seems really messy to me that my code has to generate the HTML. I'd rather just make an HTML file and fill in the blanks with the scripting.
But I'm new to web development!
Also, what's the big whoop with scripting languages? Does it really save people so much time not having to declare variables & types? I like strong typing. In the end it saves much more of my time.
anon
Thursday, July 15, 2004
CGI (in the traditional sense) just isn't a good platform to write whole web apps on, as the webserver has to launch a separate process for each CGI request.
So in that sense not many people are writing CGI apps, let alone Perl CGI ones. Rather using modules built into the webserver like mod_perl, mod_php4, mod_python etc. Or FastCGI which is subtly different to vanilla CGI. The only time I really see Perl CGI being used is things like formmail.pl and that really ancient message board script whose name I forget. Essentially slightly dated bolt-on interactive bits for static sites.
Perl CGI was the first language/platform a lot of us learned for web development back in the day - I really wouldn't choose to use it now though, and I don't see many others writing Perl for the web these days. Despite PHP's numerous shortcomings 99% of the things a simple web app needs to do are much easier to write and maintain in PHP, than Perl.
If you're looking for real elegance try one of the many Python-based web platforms. Or just use Python for CGI scripting if you're okay with the performance hit. Python is a lovely language to work with, much prettier more elegant and consistent than Perl or PHP.
Matt
Thursday, July 15, 2004
Matt-
PHP's many shortcomings....
-like?
I actually enjoy coding in PHP a great deal, and I've yet to encounter something I flat out cannot do with a combination of PHP and Javascript.
As for "having to generate my html", I actually enjoy the control of outputting HTML how and when I want to, and the ability to parse and reparse it before sending it off to the browser. I think Perl is very suited to this through regexps. You can use perl regexps in PHP, but the syntax is clumsy.
When I write PHP I still don't inline with HTML most of the time (I've only recently started experimenting with it at all), but rather build my output string progressively in code and then echo it when I'm good and ready.
PHP's shortcomings... hm.
(NB: I recognise some of these are fixed / improved in PHP5, but barely anybody offers PHP5 hosting so I've not had the chance to work with it yet)
-Very inconsistently named builtin functions
-Separate sets of builtin functions for each database type
-Object oriented features are pathetic compared with any proper OO language. No multiple inheritance, interfaces, static methods, etc etc etc
-Essentially only one built-in data structure, which works as both a list and an associative array - some may see this as an advantage, I don't
-Lack of namespaces or modules
-Default behaviour is to copy objects rather than pass references, which can be hard/clumsy to get around, especially for the elements of an array when doing a foreach loop
-Generally gives the impression of being haphazardly designed if at all
Despite the above, though, it does make small-to-medium scale web development very easy, and I do tend to end up using it quite a bit.
Personally I find the lack of multiple inheritance a godsend. What a muddled, useless mess.
I like the data structures just fine, too.
I'll give you the badly named internal functions, and the lack of namespaces.
I know multiple inheritance can lead to a mess, but I feel I know how to use it responsibly and therefore am annoyed by languages which try to deny me the opportunity to use it.
Say if you have a class Foo, and you've written two orthogonal extensions to that class, Foo_with_X and Foo_with_Y. You then realise you also need a Foo_with_X_and_Y. With multiple inheritance in Python this sort of thing is a breeze using MixIn classes, but in PHP you either end up doing a lot of messy refactoring, or just copying and pasting the two pieces of code into a third class extending Foo (which is obviously a Bad Thing).
Actually what you guys are talking about just seems like a big mess.
The OP asked about Perl. As one other astute poster mentioned, it is agile. You can put something simple together quite quickly, and something quite complicated is also possible. CPAN.
I've heard it said that Perl is easy and it is hard. I reckon if you came up from C it should be pretty simple. If you know PHP it should be simple. If you are an IDE programmer it might be hell on wheels. We got a java consultant who ... honest to god, is writing code in notepad. The other full-time java people say he doesn't know their IDE or can't get it setup or something and can't use his so he's just using notepad. Oh my god. How in the world could a programmer ever consider to use notepad. You can get both vi or emacs (my preference) for win32.
And really, what's better than this:
print qq|
"FirstName" <b>$firstname</b>
<h2>$middle_names_exist</h2>
|;
as opposed to
echo "\"FirstName\" <b>$firstname</b>
<h2>$middle_names_exist</h2>
";
Ah ... give me Perl ANY day. Of course you could do this in PHP
print <<<HTML
"FirstName" <b>$firstname</b>
<h2>$middle_names_exist</h2>
HTML;
But YOU NEVER see anyone use the heredoc .. nope, 99% of PHP programmers create the most muddled mess you've ever seen. Some people just like to make the simple difficult.
Of course I use TT in Perl and Smarty in PHP so my programming is just DATA MANIPULATION ... I get requests, I manipulate data and I return data structures which are then handled by TT or Smarty which SHOULD be handled by someone in charge of the presentation. In this way I use my code (especially in Perl) for both web, cron, and cli applications, cus, need I say, I use MODULES and callers.
Oh, btw, I used mod_perl, MOD_PERL, when I did Perl for web apps in a big way. Talk about power and control, and yet ease of development.
Oh well. I could join the rest of my group in Java. I'm the legacy guy. Everyone else is moving forward and wasting a helluva lot of time not getting much done but they are using cool IDE's and personally I think I'd go crazy having to fit into the straight-jackets they are using.
me
Thursday, July 15, 2004
In IIS you can use Perl in ASP.
double_dark
Thursday, July 15, 2004
Ehm... OK! not really sure what your point was there, but still.
I dislike Perl for the usual kinds of reasons people dislike Perl, which I won't bore you with here. I'm not a massive PHP fan either but I do find it simpler to work with than Perl, when writing simple database-driven web apps. Just a matter of personal preference really. If I had my way I'd use Python all the time, but sadly it's rarely an option.
And for reference, I do use heredocs in PHP when appropriate, and I don't use IDEs for this kind of thing, just a good text editor.
I agree that Perl is quite possibly more flexible/agile than PHP, especially if you like using regexps for everything, but the language as a whole just isn't to my taste. For web development PHP does the job and makes it relatively easy, but for more elegance power and flexibility Python all the way :)
Cumbersome is always the word for the first app/script in any language.
MyNameIsSecret();
Thursday, July 15, 2004
Matt,
The whole expense of starting a new process for each CGI instance is largely a myth. Technically you do have to pay that price, but at least on modern UNIX that price is very small. I put PHP (using mod_php4) head to head against a CGI program written in C, both performing the exact same function. If the processing was very simple, PHP and C were identical in speed, although the system load was about half for the C program as compared to the PHP program. If the processing was complex, say involving a template engine, the C program won.
I didn't do extensive research into the cause of this, but my guess is that opening the file from disk is the most expensive operation. Reading is fairly rapid, and process creation is very rapid.
The conclusion that I came to was that while PHP is really excellent for rapid development, if the application is heavily used the compiled application is probably a better choice. The aforementioned FastCGI would probably be worth looking at too.
Six Apart () is making decent money writing Perl CGI applications, so it isn't completely dead. The accounting package I use, SQL Ledger, is also written as Perl CGI.
Clay Dowling
Thursday, July 15, 2004
Perl is a great choice for creating web applications. You don't have to use strings to build up the HTML and then print it. There are a lot of HTML/XML templating "engines" out there to help you with that. There are also quite a few projects that allow you to intermingle perl code with HTML, just like PHP. Although, I wouldn't recommend going that route. Just because PHP LETS you intermingle your code with the presentation layer (html), doesn't mean you should.
saberworks
Thursday, July 15, 2004
saber-
I had the same attitude but I've found some reasonable success intermingling php with presentation code in order to build my templates. IE, instead of creating templates with a made-up markup language for invoking my objects and such, I just use simple PHP code blocks. It saves development effort and hence, time (and sanity).
When I want to change my presentation, I just write a brand new template and use the same PHP code fragments as my markup "tags", and the backend functions just the same, with a totally different front end layout and style.
It works well for me, but perhaps I'm not explaining well. There's no real LOGIC going on in my template, just brief and easy-to-understand (due to naming conventions) method calls to invoke particular content at point X in my template.
Perl would incur more overhead than C CGI, but I tend to agree that PHP is much more scalable than CGI as it is touted.
.net, the equivalent of MS Bob.
Thursday, July 15, 2004
That should read: PHP is not as much more scalable than CGI as it is being touted.
mod_perl seems to beat the pants off of PHP for performance (and versatility, too), but setting up your framework is a considerable effort.
Clay - I wouldn't be too surprised that something written in C is faster than PHP, even as a CGI app. To make a fair comparison you should compare the C compiled CGI binary against a C compiled apache extension, or compare PHP run using mod_php4 against PHP run with CGI.
I love Perl. I've only seen the smallest snippets of PHP. Perl does everything I need it to do, so why change?
I never did get into the FastCGI stuff, preferring to write my own template engine etc., but that's just me. It may not be pretty, it may not be what you'd do, but it works, and works well.
Jack of all
Thursday, July 15, 2004
Ever heard of PerlEx? It claims to improve Perl's performance on webservers... I've never really tried it, but it might be of interest to you.
irc
Friday, July 16, 2004 is a fairly large Perl CGI application. And I prefer it to all the PHP Wiki's I've seen.
MugsGame
Friday, July 16, 2004
First of all a note to anon: Perl has similar code-embedded-in-HTML technologies to PHP/ASP/JSP. Google or search CPAN for HTML::Mason, Apache::ASP, Embperl, ePerl, HTML::Merge. If that's how you like to write your web applications you perfectly can. That's not a problem.
As for the original question, there is quite a lot of web-scripting going on in Perl nowadays. Perhaps not as much, or not relatively as much as what used to be, due to competition from PHP, Python, Java, etc. but there still is.
In fact, I like to write my web applications in Perl. They can run fast if you're using mod_perl, and there are plenty of useful modules and frameworks available on CPAN. It's also cross-platform, open-source, very powerful, has a nice and active culture behind it, and very fun to program in.
Note that it is possible that other similar technologies have some or all of the advantages of the above.
BTW, there are several concentrated links to critiques of PHP here:
(in regards to what Matt said).
Shlomi Fish
Friday, July 16, 2004
I have an array of strings. I want to wrap each string in quotation marks—that is, to prepend and append a quotation mark to each string—unless the string matches a certain pattern. There are many ways to do this, but I want to know why I'm having trouble doing it with map. Here is some example code:
my @strings = qw(boy bird FALSE);
@strings = map { unless (/FALSE/) { "\"$_\"" }} @strings;
I want this to return
"boy" "bird" FALSE
but it returns
"boy" "bird" 1
and I can't quite figure out why. The 1 appears to come from scalar evaluation of unless (/FALSE/), but why? It seems that $_ is being set to 1 at some point, but I don't see why that should be, either.
In the case of "FALSE", the block is returning the value of the last expression evaluated. That was /FALSE/, which is true (i.e., 1).
What you might want to do is something like this:
@strings = map { /FALSE/ ? $_ : qq{"$_"} } @strings;
Map puts the return from the code block in the array, not a modified version of $_. This is a common gotcha especialy if you want to run a regex on $_ and return the result.
use Data::Dumper;
my @test = qw/hello world/;
print Dumper( map { s/o// } @test ), "\n";
print Dumper( map { s/o//; $_ } @test ), "\n";
Outputs:
$VAR1 = 1;
$VAR2 = 1;
$VAR1 = 'hell';
$VAR2 = 'wrld';
Notice how the first returns that same rogue 1 like you are getting. So you just have to make sure that the last statement in your map code block returns the value you want inserted into the array.
Being somewhat map challenged, I didn't really grasp what's happening until I tried it this way. Maybe this will help to clarify for someone:
my @strings = qw(boy FALSE bird);
@strings = map { unless (/FALSE/) { "\"$_\"" }else{$_}} @strings;
$ perl -l
use strict;
use warnings;
my @strings = qw(boy bird FALSE);
@strings = map { qq{"$_"} } grep( !/FALSE/, @strings);
print join (q{,}, @strings);
__END__
"boy","bird"
$
The problem with this is that the OP said that the desired result is a list that includes every element in the original list. Some will be transformed and some won't.
Another problem is more subtle. When you grep first and then map, you loop over the list twice. If it's a very long list, this will often be a waste of time. The only time I'd want to do it is if I expect grep is going to remove a lot of the list and/or the map's work is especially time consuming. In most other cases, it's better to make the decisions in the map only.
Even when you want to remove elements, you can have the map block return () for the elements to remove. The code you wrote could look like this:
my @strings = qw(boy bird FALSE);
@strings = map { /FALSE/ ? () : qq{"$_"} } @strings;
print join q{,}, @strings;
It does the same thing, but it's only one pass over the list.
A Python Protocol Abstraction Library For Arduino Firmata
PyMata
PyMata is a high performance, multi-threaded, non-blocking Python client for the Firmata Protocol that supports the complete StandardFirmata protocol.
A new version for Python 3.5, pymata_aio, can be found here.
The API can be viewed on the wiki
##Major features
- Implements the entire Firmata 2.4.1 protocol.
- Python 2.7+ and Python 3.4+ compatibility through a shared code set. (If you are running Python 3.4 on Linux, please see note below).
- Easy to use and intuitive API. You can view the PyMata API Documentation here or view in the Documentation/html directory.
- Custom support for stepper motors, Sonar Ping Devices (HC-SRO4), Piezo devices and Rotary Encoders.
- Wiring diagrams are provided for all examples in the examples directory.
- Digital and Analog Transient Signal Monitoring Via Data Latches:
- They provide "one-shot" notification when either a digital or analog pin meets a user defined threshold.
- Analog latches compare each data change to a user specified value.
- Comparison operators are <, >, <= and >=
- Digital latches compare a data change to either a high or low, specified by the user.
- Latches can easily be re-armed to detect the next transient data change.
- Latches can be either manually read or a callback can be associated with a latch for immediate notification.
- Optional callbacks provide asynchronous notification of data updates.
Callbacks
- Digital input pins.
- Analog input pins.
- Encoder changes.
- I2C read data changes.
- SONAR (HC-SR04) distance changes.
- Analog latch condition achieved.
- Digital latch condition achieved.
- Callbacks return data reports in a single list format.
- Polling methods and callbacks are available simultaneously and can be used in a mixed polled/callback environment.
- Callbacks return data in a single list.
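The one-shot latch behaviour described above can be illustrated with a small conceptual sketch (this is not the PyMata API, just the idea of an armed threshold that fires once and must be re-armed):

```python
import operator

# Conceptual one-shot latch: fires once when a sample meets the threshold
# condition, then stays disarmed until re-armed.
class Latch:
    OPS = {"<": operator.lt, ">": operator.gt,
           "<=": operator.le, ">=": operator.ge}

    def __init__(self, op, threshold, callback):
        self.test = self.OPS[op]
        self.threshold = threshold
        self.callback = callback
        self.armed = True

    def feed(self, value):
        if self.armed and self.test(value, self.threshold):
            self.armed = False            # one-shot: disarm after firing
            self.callback(value)

    def rearm(self):
        self.armed = True

hits = []
latch = Latch(">=", 512, hits.append)
for v in (100, 400, 600, 700):
    latch.feed(v)
print(hits)  # [600] -- only the first crossing is reported
```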
The callback data return values
Control-C Signal Handler
Below is a sample Control-C signal handler that can be added to a PyMata Application. It suppresses exceptions being reported as a result of the user entering a Control-C to abort the application.
import sys
import signal
# followed by any other imports your application requires

# create a PyMata instance
# set the COM port string specifically for your platform
board = PyMata("/dev/ttyACM0")

# signal handler function called when Control-C occurs
def signal_handler(signal, frame):
    print('You pressed Ctrl+C!!!!')
    if board != None:
        board.reset()
    sys.exit(0)

# listen for SIGINT
signal.signal(signal.SIGINT, signal_handler)

# Your Application Continues Below This Point
Misc
- Want to extend PyMata? See Our Instructables Article explaining how stepper motor support was added. Use it as a guide to customize PyMata for your own needs.
- Check Out Mr. Y's Blog Here for all the latest news!
Special Note For Linux Users Wishing to Use Python 3.5
pymata_aio is now available for Python 3.5.
This class allows a general X-STEP engine to run generic functions on any interface norm, in the same way. It includes the transfer operations. I.e. it gathers the already available general modules, the engine has just to know it. More...
#include <XSControl_Controller.hxx>
This class allows a general X-STEP engine to run generic functions on any interface norm, in the same way. It includes the transfer operations. I.e. it gathers the already available general modules, the engine has just to know it.
The important point is that a given X-STEP Controller is attached to a given couple made of an Interface Norm (such as IGES-5.1) and an application data model (CasCade Shapes for instance).
Finally, Controller can be gathered in a general dictionary then retrieved later by a general call (method Recorded)
It does not manage the produced data, but the Actors make the link between the norm and the application
Initializing with names: <theLongName> is the complete, official, long name; <theShortName> is the short name used for resources.
Returns the Actor for Read attached to the pair (norm, appli). It can be adapted for data of the input Model, as required. Can be read from the field then adapted with the Model as required.
Reimplemented in IGESControl_Controller.
Returns the Actor for Write attached to the pair (norm, appli). Read from field. Can be redefined.
Records a Session Item, to be added for customisation of the Work Session. It must have a specific name. <setapplied> is used if the item is a GeneralModifier: if set to true, it will be applied to the hook list "send"; else, it is not applied to any hook list. Remark: this method is to be called at Create time; the recorded items will be used by Customise. Warning: if <name> conflicts, the last recorded item is kept.
Records <me> in a general dictionary under its Short and Long Names (see method Name).
Customises a WorkSession, by adding to it the recorded items (by AddSessionItem)
Reimplemented in STEPControl_Controller, and IGESControl_Controller.
Tells if a value of <modetrans> is a good value (within bounds). Actually only for shapes.
Returns recorded min and max values for modetrans (write). Actually only for shapes. Returns True if bounds are set, False otherwise (then, the value is free).
Returns the help line recorded for a value of modetrans; empty if help is not defined, not within bounds, or if values are free.
Returns a name, as given when initializing : rsc = False (D) : True Name attached to the Norm (long name) rsc = True : Name of the resource set (i.e. short name)
Creates a new empty Model ready to receive data of the Norm Used to write data from Imagine to an interface file.
Implemented in STEPControl_Controller, and IGESControl_Controller.
Returns the Protocol attached to the Norm (from field)
Tells if a shape is valid for a transfer to a model. Asks the ActorWrite (through a ShapeMapper).
Tells if <obj> (an application object) is a valid candidate for a transfer to a Model. By default, asks the ActorWrite if known (through a TransientMapper). Can be redefined.
Records <me> in a general dictionary under a name. Raises an error if <name> is already used for another one.
Returns the Controller attached to a given name. Returns a Null Handle if <name> is unknown.
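The Record/Recorded contract can be sketched as a simple name-keyed registry (an illustration of the pattern in Python, not the OpenCASCADE C++ API):

```python
# Hypothetical registry sketch: register under a unique name, look up by
# name, and get a "null" result (None) for unknown names.
_registry = {}

def record(name, controller):
    """Register under `name`; error if the name is taken by another one."""
    if name in _registry and _registry[name] is not controller:
        raise KeyError("name %r already used for another controller" % name)
    _registry[name] = controller

def recorded(name):
    """Return the controller for `name`, or None (the 'Null Handle')."""
    return _registry.get(name)
```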
Returns an item given its name to record in a Session. If <name> is unknown, returns a Null Handle.
Sets minimum and maximum values for modetrans (write). Erases formerly recorded bounds and values. Actually only for shapes. Then, for each value, a little help can be attached.
Attaches a short line of help to a value of modetrans (write)
Changes names; if a name is empty, the formerly set one remains. Remark: does not call Record or AutoRecord.
Records the name of a Static to be traced for a given use.
Reimplemented in STEPControl_Controller, and IGESControl_Controller.
Returns the SignType attached to the norm (from field)
Returns the WorkLibrary attached to the Norm. Remark that it has to be in phase with the Protocol (read from field) | https://dev.opencascade.org/doc/refman/html/class_x_s_control___controller.html | CC-MAIN-2022-27 | refinedweb | 675 | 61.26 |
Ansible is pursuing a strategy of having one code base that runs on both Python-2 and Python-3 because we want Ansible to be able to manage a wide variety of machines. Contributors to Ansible should be aware of the tips in this document so that they can write code that will run on the same versions of Python as the rest of Ansible.
Ansible can be divided into three overlapping pieces for the purposes of porting: controller-side code, modules, and the shared module_utils code.
Much of the knowledge of porting code will be usable on all three of these pieces but there are some special considerations for some of it as well. Information that is generally applicable to all three places is located in the controller-side section.
Much of the guidance below applies to both controller-side and module code.
Most of the general tips for porting code to run on both Python-2 and Python-3 apply to porting controller code. The best place to start learning to port code is Lennart Regebro's book: Porting to Python 3.
The book describes several strategies for porting to Python 3. The one we're using is to support Python-2 and Python-3 from a single code base.
One of the most essential things to decide upon for porting code to Python-3 is the string model. In Python-2, byte strings and text strings could be mixed freely; that works until the code gets non-ASCII input and starts throwing exceptions due to not knowing what encoding the non-ASCII characters should be in.
Python-3 changes this behavior by making the separation between bytes (bytes) and text (str) more strict. Python will throw an exception when trying to combine and compare the two types. The programmer has to explicitly convert from one type to the other to mix values from each.
This change makes it immediately apparent to the programmer when code is mixing the types inappropriately, rather than working until one of their users causes an exception by entering non-ASCII input. However, it forces the programmer to proactively define a strategy for working with strings in their program so that they don't mix text and byte strings unintentionally.
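Concretely, on Python-3 (a minimal illustration):

```python
# Python-3 refuses to guess an encoding when the two types meet.
try:
    b"path/" + "name"                     # bytes + text
    raised = False
except TypeError:
    raised = True

# An explicit conversion at the boundary is the fix.
joined = b"path/" + "name".encode("utf-8")
print(raised, joined.decode("utf-8"))     # True path/name
```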
This is a partial list of places where we have to convert to and from bytes. It's not exhaustive but gives you an idea of where to watch for problems.
Use the following boilerplate code at the top of all controller-side modules:
In Python-2.x, octal literals could be specified as 0755. In Python-3, octals must be specified as 0o755.
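A quick check of the equivalence on Python-3:

```python
mode = 0o755                     # the Python-3 spelling of the old 0755 literal
assert mode == 493 == int("755", 8)
print(oct(mode))                 # 0o755
```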
Ansible modules are slightly harder to port than normal code from other projects. A lot of mocking has to go into unit testing an Ansible module, so it's harder to test that your porting has fixed everything or to make sure that later commits haven't regressed the Python-3 support.
There are a large number of modules in Ansible. Most of those are maintained by the Ansible community at large, not by a centralized team. To make life easier on them, it was decided not to break backwards compatibility by mandating that all strings inside of modules are text and converting between text and bytes at the borders; instead, we’re using a native string strategy for now.
Native strings refer to the type that Python uses when you specify a bare string literal:
"This is a native string"
In Python-2, these are byte strings. In Python-3 these are text strings. The module_utils shipped with Ansible attempts to accept native strings as input to its functions and emit native strings as their output. Modules should be coded to expect bytes on Python-2 and text on Python-3.
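A minimal native-string helper in that spirit might look like this (a sketch of the idea, not Ansible's actual module_utils code):

```python
def to_native(value, encoding="utf-8"):
    """Coerce to the interpreter's native `str`:
    text on Python-3, byte string on Python-2."""
    if isinstance(value, str):
        return value
    if str is bytes:                   # running on Python-2
        return value.encode(encoding)  # text -> native byte string
    return value.decode(encoding)      # Python-3: bytes -> native text

print(to_native(b"caf\xc3\xa9"))       # café (on Python-3)
```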
Until Ansible-2.4, modules needed to be compatible with Python-2.4 as well. Python-2.4 did not understand the new exception-catching syntax so we had to write a compatibility function that could work with both Python-2 and Python-3. You may still see this used in some modules:
from ansible.module_utils.pycompat24 import get_exception

try:
    a = 2/0
except ZeroDivisionError:
    e = get_exception()
    module.fail_json(msg="Tried to divide by zero: %s" % e)
Unless a change is going to be backported to Ansible-2.3, you should not have to use this in new code.
Before Ansible-2.4, modules had to be compatible with Python-2.4. Python-2.4 did not understand the new syntax for octal literals so we used the following workaround to specify octal values:
# Can't use 0755 on Python-3 and can't use 0o755 on Python-2.4
EXECUTABLE_PERMS = int('0755', 8)
Unless a change is going to be backported to Ansible-2.3, you should not have to use this in new code.
module_utils code is largely like module code. However, some pieces of it are used by the controller as well. Because of this, it needs to be usable with the controller’s assumptions. This is most notable in the string strategy.
Module_utils must use the Native String Strategy. Functions in module_utils receive either text strings or byte strings and may emit either the same type as they were given or the native string for the Python version they are run on depending on which makes the most sense for that function. Functions which return strings must document whether they return text, byte, or native strings. Module-utils functions are therefore often very defensive in nature, converting from potential text or bytes at the beginning of a function and converting to the native string type at the end. | https://docs.ansible.com/ansible/2.6/dev_guide/developing_python_3.html | CC-MAIN-2018-51 | refinedweb | 908 | 70.13 |
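A sketch of what such a defensive function looks like (this illustrates the native string strategy described above; it is not Ansible’s actual helper — Ansible ships to_native and friends in ansible.module_utils for real use):

```python
import sys

def to_native(value, encoding="utf-8", errors="strict"):
    """Accept text or bytes and return the native str type for this Python."""
    if isinstance(value, str):            # already the native string type
        return value
    if sys.version_info[0] >= 3:
        if isinstance(value, bytes):      # Python 3: decode bytes -> text
            return value.decode(encoding, errors)
    else:
        if isinstance(value, unicode):    # Python 2: encode text -> bytes  # noqa: F821
            return value.encode(encoding, errors)
    raise TypeError("expected a text or byte string, got %r" % type(value))
```

Convert at the start of a function, work with one type internally, and convert back (if needed) at the end.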
Namespaces are used to organize classes. They help to control the scope of class and method names in larger .NET programming projects. In simpler words, a namespace provides a way to keep one set of names (such as class names) distinct from other sets of names. The biggest advantage of using namespaces is that class names declared in one namespace will not clash with the same class names declared in another namespace. A namespace is also referred to as a named group of classes having common features. The members of a namespace can be namespaces, classes, interfaces, structures, and delegates.
Defining a Namespace
To define a namespace in C#, we will use the namespace keyword followed by the name of the namespace and curly braces containing the body of the namespace as follows:
Syntax:
namespace name_of_namespace
{
    // Namespaces (nested namespaces)
    // Classes
    // Interfaces
    // Structures
    // Delegates
}
Example:
// defining the namespace name1
namespace name1
{
    // C1 is a class in the namespace name1
    class C1
    {
        // class code
    }
}
Accessing the Members of Namespace
The members of a namespace are accessed by using the dot (.) operator. A class in C# is fully identified by its namespace together with the class name.
Syntax:
[namespace_name].[member_name]
Note:
- Two classes with the same name can be created inside 2 different namespaces in a single program.
- Inside a namespace, no two classes can have the same name.
- In C#, the full name of a class starts with its namespace name, followed by the dot (.) operator and the class name; this is termed the fully qualified name of the class.
Example:
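The code listing for this example did not survive extraction. Below is a minimal reconstruction consistent with the explanation that follows (the names first, Geeks_1, and display() are taken from later in the article; treat this as a sketch, not the original listing):

```csharp
// Namespace "first" containing the class Geeks_1
namespace first
{
    class Geeks_1
    {
        public static void display()
        {
            // Fully qualified call: namespace.class.method
            System.Console.WriteLine("Hello Geeks!");
        }
    }
}

class Program
{
    static void Main()
    {
        first.Geeks_1.display();
    }
}
```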
Output:
Hello Geeks!
In the above example:
- In System.Console.WriteLine(), “System” is a namespace in which we have a class named “Console” whose method is “WriteLine()”.
- It is not necessary to keep every C# class within a namespace, but we do so to organize our code well.
- Here, the dot (.) is the delimiter that separates the class name from the namespace name, and the method name from the class name.
The using keyword
It is not actually practical to call a function or class (that is, a member of a namespace) every time by its fully qualified name. In the above example, System.Console.WriteLine("Hello Geeks!"); and first.Geeks_1.display(); are fully qualified names. So C# provides the using keyword, which helps the user avoid writing fully qualified names again and again. The user just has to mention the namespace name at the start of the program and can then avoid the use of fully qualified names.
Syntax:
using [namespace_name][.][sub-namespace_name];
In the above syntax, dot(.) is used to include subnamespace names in the program.
Example:
// predefined namespace name
using System;

// user-defined namespace name
using name1;

// namespace having a subnamespace
using System.Collections.Generic;
Program:
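The program listing was lost in extraction. A minimal reconstruction consistent with the surrounding text (it reuses the namespace first and class Geeks_1 described earlier, this time brought in with using so the calls need no prefix):

```csharp
using System;
using first;

namespace first
{
    class Geeks_1
    {
        public static void display()
        {
            // "using System" lets us drop the System prefix here
            Console.WriteLine("Hello Geeks!");
        }
    }
}

class Program
{
    static void Main()
    {
        // "using first" lets us call display() without the namespace prefix
        Geeks_1.display();
    }
}
```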
Output:
Hello Geeks!
Nested Namespaces
You can also define a namespace inside another namespace, which is termed a nested namespace. To access the members of a nested namespace, the user has to use the dot (.) operator.
For example, Generic is a namespace nested inside the Collections namespace, as in System.Collections.Generic.
Syntax:
namespace name_of_namespace_1
{
    // Member declarations & definitions
    namespace name_of_namespace_2
    {
        // Member declarations & definitions
        . .
    }
}
Program:
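The program listing here was also lost in extraction. A sketch of a nested-namespace program whose output matches the line shown below (the names outer, inner, and Sample are invented for illustration):

```csharp
using System;

namespace outer
{
    namespace inner
    {
        class Sample
        {
            public Sample()
            {
                Console.WriteLine("Nested Namespace Constructor");
            }
        }
    }
}

class Program
{
    static void Main()
    {
        // Members of a nested namespace are reached with the dot operator
        new outer.inner.Sample();
    }
}
```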
Output:
Nested Namespace Constructor
I used auto-rig pro and didn’t like the results, so I want to undo everything it did. I’ve deleted the rig and vertex groups it created, but there’s still duplicates of every object in the outliner. However, Blender treats these duplicates as being the same thing as the originals, so if I delete one it also deletes the other. Also, moving them into the collection that holds the originals does nothing. What caused this and how do I undo it?
I believe that you are seeing your outliner in the scenes mode and not in the layer view mode.
In the scenes mode you can see this “duplicate”, but it’s only the way the outliner presents the scene in this mode. They are not duplicates; they are the representation of the objects you have in your scene in the OBJECTS category.
Change to layer view mode and everything will be OK. But I could be wrong, since you did not post any screenshot to give a clue.
Change it here:
I get this when I have one object parented to another, but they’re not in the same collection.
The ghost is telling you that that object is a child of the one above it in the hierarchy, but it’s associated with a different collection.
I’ve updated my post with a picture. As you can see I’m in the view layer and yet two things are selected at the same time.
I’ve updated my post with a picture. As you can see, nothing has any parents, but yet two things are selected at the same time.
It’s simply the same object in two different collections. You can remove an object from a collection by right-clicking and selecting unlink (or by deleting the collection)
These couldn’t be duplicates because they had the same name. Blender uses a global namespace which means there cannot be conflicting names, even in different object hierarchies or in different collections.
Thanks. I tried unlink earlier, but it would delete the object from both collections. However, I tried unlink again on the object in the other collection and it suddenly worked properly. Very frustrating. I assume it has something to do with how the collections are nested and where the objects are in that hierarchy.
You’re welcome. I don’t know what could cause this issue, I tried replicating the hierarchy from your capture and it worked fine. That being said it’s an area (outliner and context menus) that’s been changing a lot since a couple Blender versions, and is bound to keep changing…
Could it be caused by importing a model from another program which has a different way to manage hierarchy?
I don’t see how an object would influence Blender’s behaviour when imported, but I could be mistaken
I don’t know, maybe if the origin program works with an object as being part of two collections or is it’s way of managing relations at some level, because, if you see there is a collection called “body”, that as I see includes all the meshes of the character, and there is this other “clothes” that refers to the clothing part of those meshes, so I have no idea how this could influence Blender behavior, but I see that it’s not just random, there is logic there, so I believe it should come from somewhere.
God only knows what Auto Rigger Pro did. It’s a complete black box to me.
Actually, @Blenderer gave the right answer. I made a test here and parenting objects in different collections does exactly that.
Don’t you have a previous save to revert to?
I do have a previous save, but I wanted to know what happened for future reference. I actually figured it out. In the view layer, one of the object references is yellow and the other is YELLOW. Apparently one of those is your “real” selection. If you unlink the other one, it’ll delete the object. However, if you unlink the one you’ve correctly selected, it works fine. It’s easy to get them mixed up because they look almost exactly the same. So I just made sure to click on the one I wanted to delete and then unlinked precisely that one and it worked. Make sense? Thanks again for your help. I’ve already marked your earlier answer as the solution. I’ll heart it too for good measure.
Autorig pro does its own thing, I’m not surprised it does weird stuff in the background for whatever purpose… I guess you could ask the author if you’re curious about its inner workings (“lucky” on this forum) | https://blenderartists.org/t/what-are-these-fake-duplicates-in-the-outliner/1284406 | CC-MAIN-2021-43 | refinedweb | 794 | 69.92 |
Hi,
I am new to C programing.
I want a program like , if i given folder path . IT has to delete the all the subfolder and files.
Please help me on this.
Regards,
Raaj
Why would you want this, and what work have you already done on such a program?
Hi ,
I dont knw C . So how can i prepare. Any way i want to genarate one exe file. By running that exe iwant to delete the files and folders.
Please help meon this OR give a code for that program.
Oh, so you want me to teach you C or do your work for you? I don't think so.
ok , do the program for me
If you really do not know C then why do you need to write such program? Which OS are you using?
i am using windows xp . my team lead requirement.
If the purpose is for you to "learn how you do this", then you should perhaps LEARN something, rather than ask someone else who already knows how to do this how to do it.
Code:
del foldername /s
I would suggest that you look at "how to find files and folders" (not quite sure of the exact name, but it should be fairly obvious) in the FAQ.
--
Mats
can you tell me one thing .
# include <windows.h>
what it is and how can i include this file into my code
When you ask "What it is", that's much harder to answer, because you may be asking:
"What does windows.h contain?" - to which the short answer is "Lot of definitions of functions, types and structures that are part of the Windows API". For more details, you need some book or such on the Windows API.
"How does it affect my program?" - it enables you to make use of Windows API functions.
Or many other questions, including ones that I couldn't even imagine...
--
Mats
I really don't see much point in creating this program in C; you are probably not using the best language for it (especially since it is a completely new language to you and isn't quick to learn), and the program could be more easily developed as a batch script (see "Batch files - Ask for user input"). If you still want to program it in C, you might want to include:
Code:
#include <stdio.h>
I see the next comment from your team lead being "where the hell are all my files!?"
Some notes on the use of the word "urgent" when asking for help.
Code:
#pragma comment( lib, "shell32.lib" )
#include <stdio.h>
#include <string.h>
#include <windows.h>
#include <shellapi.h>

BOOL DeleteDirectory(const char *pSource)
{
    SHFILEOPSTRUCTA sh;
    char from[MAX_PATH + 2];

    /* pFrom must be DOUBLE-null-terminated; a plain string literal only
       carries one terminator, so copy the path into a zeroed buffer. */
    ZeroMemory(from, sizeof(from));
    strncpy(from, pSource, MAX_PATH);

    ZeroMemory(&sh, sizeof(sh));
    sh.hwnd = NULL;
    sh.wFunc = FO_DELETE;
    sh.pFrom = from;
    sh.fFlags = FOF_NOCONFIRMATION | FOF_SILENT;

    /* SHFileOperationA returns 0 on success; explicit ANSI version so this
       compiles even when UNICODE is defined */
    return !SHFileOperationA(&sh);
}

int main(void)
{
    if (DeleteDirectory("C:\\TEMP") == FALSE)
        printf("Temp folder NOT deleted\n");
    else
        printf("Temp folder deleted successfully\n");
    return 0;
}
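For completeness (and not part of the original thread, which targeted Windows XP): on POSIX systems the same recursive delete is commonly written with nftw(), walking the tree depth-first so children are removed before their parents:

```c
#define _XOPEN_SOURCE 500   /* expose nftw() */
#include <assert.h>
#include <ftw.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

static int unlink_cb(const char *path, const struct stat *sb,
                     int typeflag, struct FTW *ftwbuf)
{
    (void)sb; (void)typeflag; (void)ftwbuf;
    return remove(path);   /* remove() handles both files and empty dirs */
}

int remove_tree(const char *path)
{
    /* FTW_DEPTH: visit children first; FTW_PHYS: don't follow symlinks */
    return nftw(path, unlink_cb, 64, FTW_DEPTH | FTW_PHYS);
}
```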
sage.plot.plot3d.shapes2.Line() does not work in the cloud
The following code (straight from...) does not work in the SageMath Cloud, at least not for me:
from sage.plot.plot3d.shapes2 import Line
Line([(i*math.sin(i), i*math.cos(i), i/3) for i in range(30)], arrow_head=True)
The result is a long error message ending with "TypeError: 0 is not JSON serializable". Am I doing something wrong?
I guess I can just use line3d() instead.... but maybe this error indicates something else is going on.
I think this is a SMC 3d renderer bug, as this works fine in the normal Sage notebook.
I've created a ticket:... | https://ask.sagemath.org/question/29546/sageplotplot3dshapes2line-does-not-work-in-the-cloud/ | CC-MAIN-2018-13 | refinedweb | 114 | 77.53 |
No Stinking Mutants
In the quest for programming nirvana developers are constantly trying to reduce complexities in their code. One source of confusion and complexity is mutation. This post is about the different faces of mutation and state change, and the ways that Clojure helps to alleviate the complexities surrounding them. This is not a comprehensive treatise by any means, but hopefully it serves as a survey.
disclosure: this was a rejected submission to PragPub’s Special Clojure issue (which was excellent BTW), so it’s much longer than I would have liked for my blog, and probably much more formal than I normally write.
A surfeit of mutation
The Java programming language allows one to create classes with publicly accessible fields:
package gfx;

public class ThreeDee {
    public double x;
    public double y;
    public double z;
}
This level of promiscuity in a Java class definition allows any other piece of code to directly manipulate the instance fields:
ThreeDee p = new ThreeDee();
p.x = 0.0;
p.y = 1.0;
p.z = 2.0;
Almost every Java book written will discourage public class fields in the name of avoiding tight coupling, and will instead promote the use of getters and setters. From the perspective of mutation complexities, however, there is very little difference between one and the other. The tangled web of mutation still exists.
While this example is extreme, Java’s model for mutation leads to a tightly coupled web of mutation that can make programs difficult to reason about, test, and change. There are better alternatives to this web of insanity, as I will discuss next.
Package local mutation
A more constrained model of mutation is one bounded by package access. Observe the following class definition:
package gfx;

public class ThreeDee {
    double x;
    double y;
    double z;
}
The class ThreeDee now limits access to its fields to only the classes within the gfx package. This is less of a problem for coupling because the assumption is such that you have more control over the code in a given package and can therefore adjust accordingly should the fields in ThreeDee change. However, while the web of mutation has been shrunk, it is still complex within the gfx package itself.
While certainly not a widespread phenomenon, you will on occasion encounter package-level mutable fields in Java source in the wild.
Class local mutation
Every Java book (and most OO books in general — for good reason IMO) written will espouse the virtues of encapsulation and data hiding. This practice is useful not only in hiding implementation details, but also to hide mutation. For example, Google’s Guava library provides an ImmutableList class that provides an immutable implementation of the classic list data-structure. An example usage is as below:
import com.google.common.collect.ImmutableList;

ImmutableList<String> lst = ImmutableList.of("servo", "joel", "crow");

System.out.println("lst is " + lst.toString());
System.out.println("REVERSING!");
lst.reverse();
System.out.println("lst is now " + lst.toString());
When running the code above, you’ll notice the following printed:
lst is [servo, joel, crow]
REVERSING!
lst is [servo, joel, crow]
The ImmutableList class lives by its namesake and does not provide a mutable interface; in fact, if you try to call the mutable bits of the java.util.List interface, a java.lang.UnsupportedOperationException is thrown.
lst.remove(0); // java.lang.UnsupportedOperationException
The Guava library is designed to provide clean immutable collections [1] for use in Java programs. However, that’s not to say that mutation isn’t there. Instead, the mutable bits are cleverly hidden away inside of the Guava classes. In the case of ImmutableList there is a plain-old Java array holding the elements of the list hidden away from grubby little mutants.
Limiting mutation at the class boundary is a fairly nice way to develop Java classes, especially those requiring concurrent execution. That is, if a class is immutable from an external API perspective and its internals are thread-safe, then instances can be shared freely across thread boundaries.
However, there is still a problem. That is, Google’s Guava library is an amazing piece of programming, but its advantages can only be realized within the context of a system-wide convention for immutability. In other words, Java will not enforce immutability as a language feature — the onus is on us to enforce our own best-practices.
I don’t know about you, but I’ve found programming conventions and best-practices difficult to observe completely when left to my own devices.
This is where Clojure enters the fray.
Single points of mutation
Clojure is a programming language in the Lisp family of languages that eschews promiscuous mutation. The core libraries and data structures are geared toward immutability by default. In fact, Clojure’s data-structures provide most of the same functionality as Guava, except in Clojure these features are exposed and enforced at the language level. Therefore, the problem of an adherence to convention is simplified vastly. However, let’s be realistic; sometimes mutation is needed. In the case where mutation really does seem like a good fit for any given problem at hand, Clojure provides multiple solutions.
Reference types
Recall the image denoting the aforementioned web of mutation:
Clojure’s model of mutation attempts to simplify the model from the chaos above by distilling the points of mutation into as few points of evil as possible; preferably one:
Clojure offers a set of reference types that provide a mutation model centered on singular points of mutation. Reference types can be viewed as containers for values — where values are things that cannot be changed, like the number 9 or the immutable vector [1 2 3]. Clojure therefore allows mutation only at the boundary of the reference type under very specific semantic constraints. The precise mutation semantics of each reference type are beyond the scope of this article, [2] but common among each is that the mutation occurs as the result of a function call given the reference type’s current value.
Atoms
The simplest of Clojure’s mutable reference types is the Atom. Simply put, the Atom implements thread-safe compare-and-swap logic for mutation.
(def TIME (atom 0))
The Atom TIME, when created, will hold the value 0. To get at the value inside, Clojure provides a dereferencing function, deref (note: the symbol ;=> denotes a function return value):
(deref TIME) ;=> 0
All of Clojure’s reference types adhere to a simple interface for retrieving their value using deref (or the syntactic @ operator, which does the same thing). To update the value in the Atom TIME, Clojure uses the swap! function, which takes the Atom and an update function that will be used to calculate a new value from the current value:
(swap! TIME inc)

@TIME ;=> 1
You can also pass arguments to the update function (where applicable):
(swap! TIME + 100)

@TIME ;=> 101
Internal to the swap! function, the preceding will be executed as such:
- Get the current value of TIME from the Atom
- Calculate (+ <current value> 100)
- Check if the value in the Atom is the same as before
- If it is, then set the new value to the calculated value
- Else retry from step 1
Pretty simple, no? It is when provided as a language-level feature, but implementing compare-and-swap correctly and efficiently in the context of a large codebase is a decidedly more complex task.
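For comparison, the retry loop described in the steps above can be sketched on the JVM with java.util.concurrent.atomic (this is an illustration, not from the original post, and Clojure’s actual Atom implementation differs in detail):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

class Swapper {
    // swap: steps 1-5 from the list above, expressed with compareAndSet.
    static <T> T swap(AtomicReference<T> ref, UnaryOperator<T> f) {
        while (true) {
            T current = ref.get();                  // 1. read the current value
            T next = f.apply(current);              // 2. compute the proposed value
            if (ref.compareAndSet(current, next)) { // 3-4. install it if unchanged
                return next;
            }
            // 5. another thread won the race; retry against the fresh value
        }
    }
}
```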
As mentioned, Atoms are but one mutable reference type provided by Clojure. I encourage you to explore the other offerings available: Refs, Vars, and Agents.
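The post deliberately punts on a deeper treatment of the other reference types (see footnote [2]), but for a taste: a Ref allows coordinated changes to several values inside a transaction. This is standard Clojure, not from the original post:

```clojure
(def checking (ref 100))
(def savings  (ref 500))

;; dosync runs the body in a transaction; alter applies a pure
;; function to the in-transaction value, just as swap! does for Atoms.
(dosync
  (alter checking - 50)
  (alter savings  + 50))

@checking ;=> 50
@savings  ;=> 550
```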
Function local mutation
You can also restrict mutation to occur only within the confines of a single function. [3] Observe a naive implementation of zencat2 that uses an internal array to build the return vector:
(defn zencat2 [x y]
  (let [sz-x (count x)
        sz-y (count y)
        ary  (make-array Object (+ sz-x sz-y))]
    (dotimes [i sz-x]
      (aset ary i (nth x i)))
    (dotimes [i sz-y]
      (aset ary (+ i sz-x) (nth y i)))
    (vec ary)))
Aside from being highly inefficient, the function zencat2 is filled to the brim with mutation. However, the mutable array ary never escapes the scope of the zencat2 function. Instead, it is converted to an instance of Clojure’s immutable vector. There will come a time when you may need to implement an algorithm that requires mutation either for speed or the sake of clarity. That’s OK, Clojure will help you to hide the necessary mutations in a lovely veneer.
Immutable locals
The function zencat2 used a mutable array instance to build a longer sequence from the concatenation of two other sequences. This fact gave the illusion of mutable locals, but in reality locals in Clojure are immutable by default. This fact can be daunting if you come from a language where mutation runs rampant. The natural question that arises when one realizes the fact of immutable locals is, “how do you keep local state?” As it turns out, the answer has already been revealed in the implementation of zencat. That is, in languages like Clojure and Erlang (to name only two) that do not have mutable function locals, the way to “change” the values of locals is through recursion. In the case of zencat, the value of the loop locals src and ret are changed to their new values on each invocation of the recur (implementing tail-recursion) statement. Therefore, it is possible, and in most cases preferable, to eliminate mutability entirely in Clojure. Observe the following:
(defn zencat3 [x y]
  (if (seq y)
    (recur (conj x (first y)) (next y))
    x))
The implementation of zencat3 contains not a single mutation! The manipulation of immutable data-structures is really Clojure’s strong point, and in fact is highly optimized to support such manipulations as idiomatic. The majority of the Clojure code that you will find in the wild will look similar to zencat3, but there is still unneeded complexity in its implementation because, as we know, recursion is a low-level operation.
Point-free
While the surface area of mutation has been eliminated with the implementation of zencat3, there is still complexity in that you need to reason through the different values that x and y take as the recursive invocations execute. What if you could eliminate x and y completely? As it turns out, you can. Functional programming fosters another style referred to as “point-free” style. In a nutshell, point-free refers to the act of composing existing functions in such a way that their arguments are never explicitly listed. An implementation of zencat4 using point-free style would work as follows:
(def zencat4 (comp vec concat))
The comp function is a highly useful tool for composing functions in a point-free style. Simply put, comp returns a function that is the composition of the functions vec and concat. In other words, comp effectively returns the following:
(fn [x y] (vec (concat x y)))
However, rather than cluttering the implementation of zencat4 with references to locals x and y, we can read the point-free implementation as: make a vector out of the concatenation of the arguments to zencat4. Point-free style is not always the best approach, but in some cases it can truly provide elegant solutions, and from my perspective represents the functional programming ideal:
- Build independent, generalized, state-free functions
- Plug them together via composition
This approach also works from the consumer’s perspective: for any desired function Z composed of functions A . B, one can understand Z simply by first understanding B, followed by A. [4] Point-free style is the capstone realization of true reusability, but it only works well when a programming language provides powerful abstractions — a separate topic for another day.
Clojure has it all
We’ve travelled a twisty passage through the different manifestations of mutation. Starting with a wide-open mutation strategy and ending on an implementation strategy without a mutation in sight, nor even a local spied. Clojure is an opinionated language with regards to mutation, but as we saw, its idioms allow for a wide range of mutation styles. While it would be beautiful and pure to disallow mutation in any of its forms, Clojure is a practical language and therefore provides the tools for mutation should requirements dictate the need for them. The title of this post is provocative (and evocative for others) in the spirit of fun. Neither I nor the Clojure ideal denies the need for mutability. Clojure provides support for a wide spectrum of options for mutation; it’s up to us to use its offerings wisely.
:F
[1] I highly recommend reading the source code of the Guava library. It and Clojure’s own immutable data structure implementations are masterful. Another high-quality codebase is Functional Java. ↩
[2] Unfortunately I need to punt on a deeper examination of how each of Clojure’s reference types simplify mutation through uniform interfaces, change as the application of a pure function, validators, and transactions. Such a topic would warrant a multi-post series IMO. ↩
[3] Clojure’s reference types are not a panacea for the complexities in designing concurrent systems. One should be mindful when designing concurrent systems, and in fact, the simplest threaded model is composed of a single thread. Clojure provides a feature called transients that provides a way to perform (potentially) faster data structure operations by using underlying mutable strategies that assume that the mutation occurs in a single thread only. The interface for transient manipulation looks very similar to that of the normal structure functions, as seen in my zencat gist. The zencat function builds a transient return vector that is the concatenation of two vectors x and y provided as arguments. While not precisely in the same category as the other mutation patterns discussed, transients deserve some consideration nonetheless. ↩
[4] Tony Morris espouses the virtues of composability more elegantly when discussing scalaz. The context is different, but the spirit is the same. ↩
6 Comments
Paul Stadig
Great article!
Only one minor nit to pick. It seems to me that you are implying (perhaps unintentionally) that Clojure’s reference types help with the “tangled web of mutation.”
I think Clojure’s reference types are more about the insanity of multi-threaded programming. Using reference types you are better able to reason about correctness, but you can still create a tangled web of mutation.
Do you agree that tangled webs of mutation are orthogonal to reasoning about multi-threaded correctness, or am I off base?
Paul
Jul 12th, 2011
Ben Ellis
@Paul
I think you’re on to something. You could use a reference (or an atom, or, on a single thread, a var) and make code that is just as full of mutation and misery as the diagrams given above. You would just be fighting against the language design as you did it.
Clojure’s preference for immutable data is its most advantageous aspect for avoiding confusing mutable state. References, vars, and atoms are ways of dealing with mutable state when you need something mutable.
References give you mutable state, and safe multithreaded access that allows you (via transactions) to enforce invariants on multiple, related bits of mutable state.
Jul 12th, 2011
fogus
@Paul
That is more than a minor nitpick as it’s definitely a glaring hole in the article. That is, I could have (and should have) gone into some discussion about minimizing the mutation, change as the application of a pure function, validators, and transactions, but I didn’t. Unfortunately for now I will punt (see my notes for punting) and just say, “this is an article for another day” since it probably warrants its own treatment. :F
Jul 12th, 2011
Paul Stadig
I guess what I was getting at was that the tangled web of mutation is more a function of encapsulation and coupling, whereas the reference types attack the problem of concurrency.
You can have a concurrently safe, well reasoned program that still inappropriately touches mutable data in lots of different namespaces (i.e. the “tangled web of mutation”).
Jul 12th, 2011
Shmoo
Hi, Fogus,
Your article was biased and inconsistent, hence poor.
Despite saying that, “The title of the post is … in the spirit of fun,” your repeated straw-manning was fit only for the school-yard.
Your repeated mentioning of, “Mutation insanity,” made your point manfully strongly, but collapsed with the embarrassing admission, “Neither myself, nor does the Clojure ideal deny the need for mutability.”
Can you see how skewed your article is?
Jul 13th, 2011
fogus
@Shmoo
I thought it was rather balanced — the joke’s on me I suppose.
Jul 13th, 2011
The SDK’s GitHub repository received more than 100 issues and pull requests. Many excited developers in the open source community gave valuable feedback that helped to improve the stability and expand the features of this SDK.
Changes and Additions
Here are some additions we’ve made since our experimental release:
- Full service coverage parity with the rest of the SDKs.
- Visual Studio 2015 support.
- OS X El Capitan support.
- Presigned URL support.
- Expansion of and improvements to the Amazon S3 TransferClient.
- Inline documentation improvements.
- More integration for custom memory management.
- Forward-compatible enumeration support.
- Improvements to our CMake exports to simplify consumer builds.
- Unicode support.
- Several service client fixes and improvements.
- Ability to build only the clients you need.
- Custom signed regions and endpoints.
- Common Crypto support for Apple platforms (OpenSSL is no longer required on iOS and OS X).
- Several stability updates related to multi-threading in our Curl interface on Unix and Linux.
- The Service Client Generator is now open sourced and integrated into the build process.
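Two of the items above (the CMake export improvements and the ability to build only the clients you need) come together at configure time. A sketch of a consumer build (the flag name is taken from the project README and may differ between SDK versions):

```shell
# Build only the S3 and Kinesis clients instead of the full service set.
cmake -DBUILD_ONLY="s3;kinesis" -DCMAKE_BUILD_TYPE=Release ..
make
```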
Also, NSURL support for Apple platforms will be committed within a week or so. After that, Curl will no longer be required on iOS or OS X.
The team would like to thank those who have been involved in improving this SDK over the past six months. Please continue contributing and leaving feedback on our GitHub Issues page.
Before we move to General Availability, we would like to receive another round of feedback to help us pin down the API with a stable 1.0 release. If you are a C++ developer, please feel free to give this new SDK a try and let us know what you think.
In Other News
Here are a few other things that you may find interesting:
- We have moved our GitHub repository from the awslabs organization to aws/aws-sdk-cpp.
- We are now providing new releases for new services and features with the rest of the AWS SDKs.
- We now have a C++ developer blog. We’ll post tutorials and samples there throughout the year. We’ll also announce improvements and features there, so stay tuned!
- We will distribute pre-built binaries for our most popular platforms in the near future. We’ll let you know when they go live.
Sample Code
Here is some sample code that writes some data to a Kinesis stream and then consumes the data:
#include <aws/kinesis/model/PutRecordsRequest.h>
#include <aws/kinesis/KinesisClient.h>
#include <aws/core/utils/Outcome.h>

using namespace Aws::Utils;
using namespace Aws::Kinesis;
using namespace Aws::Kinesis::Model;

class KinesisProducer
{
public:
    KinesisProducer(const Aws::String& streamName, const Aws::String& partition)
        : m_partition(partition), m_streamName(streamName) {}

    void StreamData(const Aws::Vector<ByteBuffer>& data);
    /* ... */
};

/* ... */
        Aws::String annoucement1("AWS SDK for C++");
        Aws::String annoucement2("Is Now in Developer Preview");
        producer.StreamData(
            { ByteBuffer((unsigned char*)annoucement1.c_str(), annoucement1.length()),
              ByteBuffer((unsigned char*)annoucement2.c_str(), annoucement2.length()) });
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
    }
    return 0;
}
— Jonathan Henson, Software Development Engineer
Simple, Pythonic text processing. Sentiment analysis, POS tagging, noun phrase parsing, and more.
Project description
TextBlob
Simplified text processing for Python 2 and 3.
Requirements
- Python >= 2.6 or >= 3.1
Installation
There are two options for installing textblob:
- Option 1 includes a bundled version of NLTK (the latest from the Github master branch). Though this option is quicker, this will override your local NLTK installation if you have one. If this concerns you, then prefer Option 2, or use textblob in a virtualenv.
- Option 2 does not include NLTK, so you will have to install the latest version manually.
Instructions for both options are below.
If you don’t have pip (you should), run this first: curl | python
Option 1: With bundled NLTK
pip install textblob
curl | python
This will install textblob and download the necessary NLTK corpora.
Option 2: Install textblob and NLTK separately
pip install git+
pip install git+
curl | python
This will install the latest NLTK from the master branch, as well as the latest version of textblob from the no-bundle branch.
Usage
Simple.
Create a TextBlob
from text.blob import TextBlob

wikitext = '''
Python is a widely used general-purpose, high-level programming language.
Its design philosophy emphasizes code readability, and its syntax allows
programmers to express concepts in fewer lines of code than would be
possible in languages such as C.
'''
wiki = TextBlob(wikitext)
Sentiment analysis
The sentiment property returns a tuple of the form (polarity, subjectivity) where polarity ranges from -1.0 to 1.0 and subjectivity ranges from 0.0 to 1.0.
testimonial = TextBlob("Textblob is amazingly simple to use. What great fun!")
testimonial.sentiment
# (0.4583333333333333, 0.4357142857142857)
Tokenization
zen = TextBlob("Beautiful is better than ugly. "
               "Explicit is better than implicit. "
               "Simple is better than complex.")
zen.words
# WordList(['Beautiful', 'is', 'better'...])
zen.sentences
# [Sentence('Beautiful is better than ugly.'),
#  Sentence('Explicit is better than implicit.'),
#  ...]
for sentence in zen.sentences:
    print(sentence.sentiment)
Words and inflection
Each word in TextBlob.words or Sentence.words is a Word object (a subclass of unicode) with useful methods, e.g. for word inflection.
sentence = TextBlob('Use 4 spaces per indentation level.')
sentence.words
# OUT: WordList(['Use', '4', 'spaces', 'per', 'indentation', 'level'])
sentence.words[2].singularize()
# OUT: 'space'
sentence.words[-1].pluralize()
# OUT: 'levels'
Get word and noun phrase frequencies
wiki.word_counts['its']  # 2 (not case-sensitive by default)
wiki.words.count('its')  # Same thing
wiki.words.count('its', case_sensitive=True)  # 1
wiki.noun_phrases.count('code readability')   # 1
TextBlobs are like Python strings!
zen[0:19]           # TextBlob("Beautiful is better")
zen.upper()         # TextBlob("BEAUTIFUL IS BETTER THAN UGLY...")
zen.find("Simple")  # 65

for sentence in zen.sentences:
    print(sentence)
    # Beautiful is better than ugly
    print("---- Starts at index {}, Ends at index {}"\
          .format(sentence.start, sentence.end))
    # 0, 30
Get a JSON-serialized version of the blob
zen.json
# '[{"sentiment": [0.2166666666666667, 0.8333333333333334],
#   "stripped": "beautiful is better than ugly",
#   "noun_phrases": ["beautiful"],
#   "raw": "Beautiful is better than ugly. ",
#   "end_index": 30, "start_index": 0}
#  ...]'
Advanced usage
Noun Phrase Chunkers
TextBlob currently has two noun phrase chunker implementations: text.np_extractors.FastNPExtractor (the default, based on Shlomi Babluki’s implementation from this blog post) and text.np_extractors.ConllExtractor. You can specify which chunker to use by passing an extractor instance to the constructor.

from text.blob import TextBlob
from text.np_extractors import ConllExtractor

extractor = ConllExtractor()
blob = TextBlob("Extract my noun phrases.", np_extractor=extractor)
blob.noun_phrases  # This will use the Conll2000 noun phrase extractor
TextBlob currently has two POS tagger implementations, located in text.taggers. The default is the PatternTagger which uses the same implementation as the excellent pattern library.
The second implementation is NLTKTagger which uses NLTK’s TreeBank tagger. It requires numpy and only works on Python 2.
Similar to the noun phrase chunkers, you can explicitly specify which POS tagger to use by passing a tagger instance to the constructor.
from text.blob import TextBlob
from text.taggers import NLTKTagger

nltk_tagger = NLTKTagger()
blob = TextBlob("Tag! You're It!", pos_tagger=nltk_tagger)
blob.pos_tags
Testing
Run
python run_tests.py
to run all tests.
Changelog
0.3.9 (unreleased)
- Updated nltk.
- ConllExtractor is now Python 3-compatible.
- Improved sentiment analysis.
- Blobs are equal (with ==) to their string counterparts.
- Added instructions to install textblob without nltk bundled.
I need someone to correct my game's code.

#A Python text-RPG
#A Jak Production
#APOC
global ammo
global health
global lives
global exp
global food
ammo=55
health = 100
lives=10
exp = 0
food = 30
def part1():
    print "50 Days After The Outbreak:You are standing outside of the Empire State Building."
    print "Vines, plants, dirt, and grime cover its once-regal surfaces. Huh."
    print "I guess if 80% of the world is dead, more people are concerned about survival than sightseeing.God."
    print "Generally,us survivors tend to band together. Mostly it is for good. Not the bandits."
    print " Bandit:'Hey you! What you got in that bag?"
    print "I recognized this Bandit as Sam Cabelo! He was the janitor at my office building. Not the nicest fellow."
    answer = raw_input("Type 'show' or 'run away' then hit the 'Enter' button.")
    if answer == "SHOW" or answer == "Show" or answer == "show":
        print "Ahhh. Nice .45 you got there. And some food.Hand it over.(He says this like a threat, reinforced by that revolver in his hand"
        answer2 = raw-input("Type either Hand it over or flee")
        if answer2 == "HAND IT OVER" or answer2 == "Hand it over" or answer2 == "hand it over"
            print "Bandit: Good Job.. Go on now"
            set food = 25
            set ammo = 40
            return answer3
        if answer2 == "FLEE" or answer2 == "Flee" or answer2 == "flee"
            print "He shot you"
            set lives = 9
        else:
            print "TYPE SOMETHING CORRECTLY"
            return answer 2
    elif answer == "run away" or "Run Away" or "RUN AWAY":
        print "He shot you... hee hee hee"
        print "When the input comes up again, type a differet answer"
    else:
        print "You didn't type Show or run away."
        part1()

part1()
Python Beginner to Expert/Structured Python
Starting the shell
As suggested in the introduction, Python was conceived as a bridge between a command shell and an application development language, so it's important to learn how to use the Python command shell. The Python shell can be invoked by opening a terminal window and entering the word python at the prompt. If more than one version of Python is installed, the version number may be required as part of the command, for example python3.2. Selecting from versions may vary depending on the OS used and the method of installation.
Starting python3.2 on an Ubuntu Linux system will create the following messages on the terminal window.
Python 3.2.2 (default, Sep 5 2011, 21:17:14)
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
The opening message shows the Python version, some build information, and four commands that may be used to get more information about Python. license() provides a brief history of Python, including release years of major versions, and detailed information about the Python distribution and license agreement. credits() provides a brief message of thanks to organizations and individuals who have supported Python. copyright() provides a short list of copyrights that pertain to Python. help() will start the Python help subsystem, which offers detailed information about Python functions, primarily derived from help strings.
The Python help() command will be discussed in detail later in this tutorial in a separate section.
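In the meantime, here is a quick taste of the help subsystem (an illustrative sketch; `help()` accepts either an object or a string naming a topic):

```python
# Two ways to use the built-in help system described above.
# help(obj) prints the docstring for obj; help('keywords')
# lists Python's reserved words.
help(len)
help('keywords')
```

Calling `help()` with no arguments starts an interactive help prompt instead.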
Python's default command prompt is >>>. Python displays a cursor to the right of the >>> prompt and python commands should be entered there.
To exit the shell and return to the system prompt, type exit() or Ctrl-D.
The shell itself does not offer much functionality to the user other than to provide a command line for entering python commands and functions. Editing capabilities in the shell are extremely limited.
Commands entered into the shell may be recalled by scrolling backward and forward through the command history using the up arrow and down arrow key. The left and right arrow keys are useful to reposition the cursor while editing the current line. The backspace key is destructive and will delete the character to the left of the cursor. It is possible to toggle between insert and overwrite modes by pressing the insert key. The delete key works like the backspace key except that it deletes the character at the cursor rather than the character to the left of the cursor. The home key repositions the cursor to the leftmost character on the current line while the end key repositions the cursor to the end of the line. Pressing enter will attempt to execute the current line no matter where the cursor is positioned. Note that pressing enter will not break the current line at the cursor.
The shell maintains a partial history of commands that have been entered starting each time the shell is invoked. The history is not preserved between sessions.
The PageUp key selects the first command in the history list for editing, while the PageDown key performs the complementary function, editing the last command entered.
Most python users will have little use for the command shell interpreter. Probably its most common use is as a calculator. Arithmetic expressions entered at the command line will be evaluated immediately and the result displayed. The result of the most recent calculation is assigned to the underscore character. The following shows the use of the underscore character.
>>> 2+2
4
>>> _
4
>>> _*_
16
Basic Arithmetic with Python
Python has two basic numeric types: int (integer) and float (floating point). A python integer can be of any length. Python integers do not have a minimum or maximum (lower or upper) limit, although naturally processing speed is reduced when dealing with very long integers. At the command line, Python will recognize that a number is an int if it is entered with no decimal point and that it is a float if a decimal point is included. Python will recognize that 1 is an int while 1.0 is a float. A float can be cast to an int and vice versa like this: float(1) casts an int to a float, while int(1.1) casts a float to an int.
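A quick session confirms this behavior (note that in Python 3, dividing two ints with / always yields a float, while // performs floor division):

```python
# int/float construction and casting, as described above
print(float(1))  # 1.0  : int cast to float
print(int(1.1))  # 1    : float truncated toward zero
print(5 / 2)     # 2.5  : true division always returns a float
print(5 // 2)    # 2    : floor division keeps the int type
```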
Python 3.2.2 for 32-bit Windows provides the following information regarding the numeric floating point type:
Python offers the following well known and understood symbols for arithmetic operations.
Operation Symbol
Exponentiation **
Modulus %; Multiplication *; Division / (equal precedence)
Addition +; Subtraction - (equal precedence)
Python observes typical precedence for these operators in the order shown. Calculations enclosed between parentheses have the highest precedence. When precedence of operations is not dictated by operator precedence or grouping by parentheses, calculations are performed from left to right.
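The rules described above can be verified directly at the prompt, for example:

```python
# Operator precedence and left-to-right evaluation, per the table above
print(2 + 3 * 4)      # 14   : * binds tighter than +
print((2 + 3) * 4)    # 20   : parentheses have highest precedence
print(2 ** 3 ** 2)    # 512  : exponentiation is right-associative
print(7 % 3)          # 1    : modulus
print(100 / 10 * 2)   # 20.0 : equal precedence, evaluated left to right
```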
Exercise:
1. Start the python interpreter. Note the version number that is displayed on the welcome screen. Exit the shell. Restart the shell. Try entering the different commands that are listed in the welcome message (help, copyright, credits, license). Try some calculations. Try dividing two integers that you would expect would produce a fractional result (for example, 5/2). Note the result. Try various combinations of operations to convince yourself that the order of precedence is predictable. Be sure to experiment with parentheses. Enter 2*(5+ at the command line. What happens when you don't match each left parenthesis with a right parenthesis?
2. Consider other ways to start the python command line interpreter. Consider which method you favor and why.
IDLE
Python scripts may be created in any text editor with a couple of caveats. 1. It is generally advisable to include a shebang line when writing for Unix types of systems. 2. In some Unix type environments, the interpreter may have problems with DOS type line endings.
A full installation of Python includes an IDE called IDLE. The name IDLE looks as if it is an acronym for something but like Python itself, it is a name borrowed from Monty Python, in this case the last name of Monty Python regular Eric Idle. IDLE was written by Python's progenitor, Guido van Rossum.
IDLE is included with most or all versions of Python available from the official python website, Python.org. It is often not included with other alternate versions of Python, such as IronPython and ActivePython.
You can start IDLE from the shell or command window; however, the preferred method would be to start IDLE via your OS's GUI menu. By default, IDLE opens two windows: the editor and the shell.
The IDLE shell resembles the command interpreter shell but provides significant additional functionality. Before discussing the additional functionality, note that we lost one of the handier features of the command interpreter shell: the capability to scroll backward and forward through the history list of commands with the up and down arrow keys. In the IDLE shell, if you want to edit and re-execute a previously issued command, you can use Alt-p and Alt-n to scroll through the recent command list. You can also find the previously issued command in the window, position the cursor on it, and press the enter key. Unlike the command shell interpreter, IDLE keeps a history of all commands in the shell from the beginning of the session until the shell is closed.
Both the IDLE shell and the IDLE editor support the mouse and scroll bars for navigating and editing content in their windows.
The following is quoted verbatim from the IDLE README
If you find bugs in IDLE or have suggestions or patches, let us know about them by using the Python issue tracker. For further details and links, read the Help files and check the IDLE home page. There is a mail list for IDLE: idle-dev@python.org.
This may prove sufficient introduction for many users. The IDLE help menu offers additional information regarding the uses and features of IDLE's menus and options, and the entire contents of the help are reproduced here.
[See the end of this file for ** TIPS ** on using IDLE !!] Click on the dotted line at the top of a menu to "tear it off": a separate window containing the menu is created. File Menu: New Window -- Create a new editing window Open... -- Open an existing file Recent Files... -- Open a list of recent files Open Module... -- Open an existing module (searches sys.path) Class Browser -- Show classes and methods in current file Path Browser -- Show sys.path directories, modules, classes and methods --- Save -- Save current window to the associated file (unsaved windows have a * before and after the window title) Save As... -- Save current window to new file, which becomes the associated file Save Copy As... -- Save current window to different file without changing the associated file --- Print Window -- Print the current window --- Close -- Close current window (asks to save if unsaved) Exit -- Close all windows, quit (asks to save if unsaved) Edit Menu: Undo -- Undo last change to current window (A maximum of 1000 changes may be undone) Redo -- Redo last undone change to current window --- Cut -- Copy a selection into system-wide clipboard, then delete the selection Copy -- Copy selection into system-wide clipboard Paste -- Insert system-wide clipboard into window Select All -- Select the entire contents of the edit buffer --- Find... -- Open a search dialog box with many options Find Again -- Repeat last search Find Selection -- Search for the string in the selection Find in Files... -- Open a search dialog box for searching files Replace... -- Open a search-and-replace dialog box Go to Line -- Ask for a line number and show that line Show Calltip -- Open a small window with function param hints Show Completions -- Open a scroll window allowing selection keywords and attributes. 
(see '*TIPS*', below) Show Parens -- Highlight the surrounding parenthesis Expand Word -- Expand the word you have typed to match another word in the same buffer; repeat to get a different expansion Format Menu (only in Edit window): Indent Region -- Shift selected lines right 4 spaces Dedent Region -- Shift selected lines left 4 spaces Comment Out Region -- Insert ## in front of selected lines Uncomment Region -- Remove leading # or ## from selected lines Tabify Region -- Turns *leading* stretches of spaces into tabs (Note: We recommend using 4 space blocks to indent Python code.) Untabify Region -- Turn *all* tabs into the right number of spaces New Indent Width... -- Open dialog to change indent width Format Paragraph -- Reformat the current blank-line-separated paragraph Run Menu (only in Edit window): Python Shell -- Open or wake up the Python shell window --- Check Module -- Run a syntax check on the module Run Module -- Execute the current file in the __main__ namespace Shell Menu (only in Shell window): View Last Restart -- Scroll the shell window to the last restart Restart Shell -- Restart the interpreter with a fresh environment Debug Menu (only in Shell window): Go to File/Line -- look around the insert point for a filename and linenumber, open the file, and show the line Debugger (toggle) -- Run commands in the shell under the debugger Stack Viewer -- Show the stack traceback of the last exception Auto-open Stack Viewer (toggle) -- Open stack viewer on traceback Options Menu: Configure IDLE -- Open a configuration dialog. Fonts, indentation, keybindings, and color themes may be altered. Startup Preferences may be set, and Additional Help Sources can be specified. On MacOS X this menu is not present, use menu 'IDLE -> Preferences...' instead. --- Code Context -- Open a pane at the top of the edit window which shows the block context of the section of code which is scrolling off the top or the window. (Not present in Shell window.) 
Windows Menu: Zoom Height -- toggles the window between configured size and maximum height. --- The rest of this menu lists the names of all open windows; select one to bring it to the foreground (deiconifying it if necessary). Help Menu: About IDLE -- Version, copyright, license, credits IDLE Readme -- Background discussion and change details --- IDLE Help -- Display this file Python Docs -- Access local Python documentation, if installed. Otherwise, access. --- (Additional Help Sources may be added here) ** TIPS ** ========== Additional Help Sources: Windows users can Google on zopeshelf.chm to access Zope help files in the Windows help format. The Additional Help Sources feature of the configuration GUI supports .chm, along with any other filetypes supported by your browser. Supply a Menu Item title, and enter the location in the Help File Path slot of the New Help Source dialog. Use http:// and/or www. to identify external URLs, or download the file and browse for its path on your machine using the Browse button. All users can access the extensive sources of help, including tutorials, available at. Selected URLs can be added or removed from the Help menu at any time using Configure IDLE. Basic editing and navigation: Backspace deletes char to the left; DEL deletes char to the right. Control-backspace deletes word left, Control-DEL deletes word right. Arrow keys and Page Up/Down move around. Control-left/right Arrow moves by words in a strange but useful way. Home/End go to begin/end of line. Control-Home/End go to begin/end of file. Some useful Emacs bindings are inherited from Tcl/Tk: Control-a beginning of line Control-e end of line Control-k kill line (but doesn't put it in clipboard) Control-l center window around the insertion point Standard Windows bindings may work on that platform. Keybindings are selected in the Settings Dialog, look there. Automatic indentation:. (N.B. Currently tabs are restricted to four spaces due to Tcl/Tk issues.) 
See also the indent/dedent region commands in the edit menu. Completions: Completions are supplied for functions, classes, and attributes of classes, both built-in and user-defined. Completions are also provided for filenames. The AutoCompleteWindow (ACW) will open after a predefined delay (default is two seconds) after a '.' or (in a string) an os.sep is typed. If after one of those characters (plus zero or more other characters) you type a Tab the ACW will open immediately if a possible continuation is found. If there is only one possible completion for the characters entered, a Tab will supply that completion without opening the ACW. 'Show Completions' will force open a completions window. In an empty string, this will contain the files in the current directory. On a blank line, it will contain the built-in and user-defined functions and classes in the current name spaces, plus any modules imported. If some characters have been entered, the ACW will attempt to be more specific. If string of characters is typed, the ACW selection will jump to the entry most closely matching those characters. Entering a Tab will cause the longest non-ambiguous match to be entered in the Edit window or Shell. Two Tabs in a row will supply the current ACW selection, as will Return or a double click. Cursor keys, Page Up/Down, mouse selection, and the scrollwheel all operate on the ACW. 'Hidden' attributes can be accessed by typing the beginning of hidden name after a '.'. e.g. '_'. This allows access to modules with '__all__' set, or to class-private attributes. Completions and the 'Expand Word' facility can save a lot of typing! Completions are currently limited to those in the namespaces. Names in an Edit window which are not via __main__. OTOH, you could make the delay zero. You could also switch off the CallTips extension. (We will be adding a delay to the call tip window.) Python Shell window: Control-c interrupts executing command. 
Control-d sends end-of-file; closes window if typed at >>> prompt (this is Control-z on Windows). Command history: Alt-p retrieves previous command matching what you have typed. Alt-n retrieves next. (These are Control-p, Control-n on the Mac) Return while cursor is on a previous command retrieves that command. Expand word is also useful to reduce typing. Syntax colors: The coloring is applied in a background "thread", so you may occasionally see uncolorized text. To change the color scheme, use the Configure IDLE / Highlighting dialog. Python default syntax colors: Keywords orange Builtins royal purple Strings green Comments red Definitions blue Shell default colors: Console output brown stdout blue stderr red stdin black Other preferences: The font preferences, keybinding, and startup preferences can be changed using the Settings dialog. Command line usage: Enter idle -h at the command prompt to get a usage message. Running without a subprocess:. Extensions: IDLE contains an extension facility. See the beginning of config-extensions.def in the idlelib directory for further information. The default extensions are currently: FormatParagraph AutoExpand ZoomHeight ScriptBinding CallTips ParenMatch AutoComplete CodeContext
IDLE Exercises
Rather than try to elaborate on or reword the lucid explanations provided in the IDLE documentation, what follows is just a series of exercises that are designed to familiarize the reader with menu items and features of the IDLE IDE.
1. Find and open IDLE from your OS's GUI menu. Try closing the editor. Note that closing the editor does not terminate the shell. Use the File|New Window menu item from the shell to open a new Editor window. Using the same menu item, determine whether you can have more than one editor window open at a time.
2. Now close the shell. Reopen the shell by using the IDLE editor's Windows|*Python Shell* menu item.
3. Enter a short program in the shell, something like
for n in range(0,10):
    print(n)
Be sure to press enter after the colon at the end of the first line. Notice that IDLE indents automatically. What do you think triggers the indentation?
4. Re-enter the line with deliberate mistakes. What happens?
5. Locate a valid copy of the code in the shell window. Using the mouse or arrow keys, position the cursor (the thin blinking vertical bar) on the line reading for n in range(0,10): and press enter. Notice that the entire block of code is copied to the active prompt where you can edit it. Try copying the same code by selecting it with the mouse, pressing Ctrl-C to copy, and then pasting at the current position. Consider which method is easier for you to use.
6. Copy the same code and paste it into the editor window. Check the script with the IDLE editor's Run|Check Module menu item. Try to run the script with the IDLE Editor's Run|Run Module menu item or with the F5 key. You should have found that in order to do either you need to save the script. There is a way to direct the IDLE editor to save script changes automatically. Where do you suppose that option is found? Be advised that IDLE does not create backup copies of your scripts but that it does keep 1000 undo steps in memory. Consider methods that you might use to keep an external audit trail so that you can revert to an earlier version of your code if necessary.
7. Save the code in the text editor if you have not already. Note whether you need to type the .py extension or whether the save dialog automatically does it for you.
8. Save the contents of the shell window. Open the saved contents in the IDLE editor. Can you think of any situation where saving the contents of the shell window might be useful?
9. Tear off a copy of any drop down menu by clicking on the dashed line at the top of the menu. Close the floating copy of the drop down menu. Consider whether this is a useful feature and what you might use it for.
10. Compare the main menu items shown in the shell with those in the editor. Notice that each has seven menus but that the editor has a Format item where the shell has a Shell item.
11. Try out the various options from the IDLE editor's Format menu. You may cut and paste the following snippet into your IDLE editor for practice:
pyramid = [3, 7, 4, 2, 4, 6, 8, 5, 9, 3]

class node(object):
    value = 0
    left = 0
    right = 0
    def __init__(self, v, l, r):
        self.value = v
        self.left = l
        self.right = r

l = []
c = 0
r = 0
t = node
for n in pyramid:
    t = node(n, 0, 0)
    l.append(t)
for t in l:
    print(t.value, t.left, t.right)
This is my first time using Python and I need to import htql.

When I run this code:
import htql;
page="<a href=a.html>1</a><a href=b.html>2</a><a href=c.html>3</a>";
query="<a>:href,tx";
for url, text in htql.HTQL(page, query):
print(url, text);
The questioner's problem is that he followed the installation instructions from the HTQL PDF manual named Hyper-Text Query Language COM Interface, which describes library setup for COM (Component Object Model by Microsoft), proposing

regsvr32 <path-to-dll>. This is intended to be used within Win32-infrastructure-aware applications in the first place.

While there are means to use COM from within Python (e.g. via pywin32 and others), it's not the method a typical Python script expects.
The proper solution is to follow the instructions from the HTQL home page, which suggest:
Windows binaries: Download the htql.zip and extract "htql.pyd" to the Python's DLLs directory, such as in 'C:\Python27\DLLs\' or 'C:\Python32\DLLs\'.
This installs the precompiled Python .pyd module within Python's library search path.
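To verify where your interpreter actually looks for extension modules such as htql.pyd, you can print the search path (a quick generic check, not specific to HTQL):

```python
# Print the module search path; the DLLs directory (on Windows,
# e.g. C:\Python27\DLLs) should appear in this list.
import sys
for entry in sys.path:
    print(entry)
```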
On Thu, 19 Oct 2000, Linus Torvalds wrote:
> I think you overlooked the fact that SHM mappings use the page cache, and
> it's ok if such pages are dirty and writable - they will get written out
> by the shm_swap() logic once there are no mappings active any more.
>
> I like the test per se, because I think it's correct for the "normal"
> case of a private page, but I really think those two BUG()'s are not bugs
> at all in general, and we should just remove the two tests.
>
> Comments? Anything I've overlooked?

The primary reason I added the BUG was that if this is valid, it means that the pte has to be removed from the page tables first with pte_get_and_clear since it can be modified by the other CPU. Although this may be safe for shm, I think it's very ugly and inconsistent. I'd rather make the code transfer the dirty bit to the page struct so that we *know* there is no information loss.

If the above is correct, then the following patch should do (untested). Oh, I think I missed adding pte_same in the generic pgtable.h macros, too. <doh!> I'm willing to take a closer look if you think it's needed.

        -ben

diff -urN v2.4.0-test10-pre4/include/asm-generic/pgtable.h work-foo/include/asm-generic/pgtable.h
--- v2.4.0-test10-pre4/include/asm-generic/pgtable.h	Fri Oct 20 00:58:03 2000
+++ work-foo/include/asm-generic/pgtable.h	Fri Oct 20 01:42:24 2000
@@ -38,4 +38,6 @@
 	set_pte(ptep, pte_mkdirty(old_pte));
 }
 
+#define pte_same(left,right) (pte_val(left) == pte_val(right))
+
 #endif /* _ASM_GENERIC_PGTABLE_H */
diff -urN v2.4.0-test10-pre4/mm/vmscan.c work-foo/mm/vmscan.c
--- v2.4.0-test10-pre4/mm/vmscan.c	Fri Oct 20 00:58:04 2000
+++ work-foo/mm/vmscan.c	Fri Oct 20 01:43:54 2000
@@ -87,6 +87,13 @@
 	if (TryLockPage ...
 	/* ... the page already in the swap cache? If so, then
 	 * we can just drop our reference to it without doing ... */
@@ -98,10 +105,6 @@
 	if (PageSwapCache(page)) {
 		entry.val = page->index;
 		swap_duplicate(entry);
-		if (pte_dirty(pte))
-			BUG();
-		if (pte_write(pte))
-			BUG();
 		set_pte(page_table, swp_entry_to_pte(entry));
 drop_pte:
 		UnlockPage(page);
@@ -111,13 +114,6 @@
 	page_cache_release ...
 	/* ... it a clean page? Then it must be recoverable ... */
I'm using the arcpy.da module with the updatecursor class to populate specific values for specific fields in several hundred shapefiles. I'm specifiying the fields of interest for the updatecursor in a list but the catch is that the fields in the list are not all present in every shapefile. So when the cursor trys to iterate through the fields of each shapefile and a field doesn't exist it throws an error saying the 'column is not specified'. I tried several ways of verifying if the fields exists but have had not luck. Any suggestions, see code below. Thanks
def field_vals(): env.workspace = 'some_wspace' field_list = ['field1', 'field2', 'field3', 'field4', 'field5'] val_list =[23,45,34,99,76] fcList = arcpy.ListFeatureClasses() for fc in fcList: with arcpy.da.UpdateCursor(fc,field_list) as cursor: for row in cursor: if row[4] == 0 and row[0] not in val_list and row[0] != 0: row[4] = row[1] elif row[4] == 0 and row[1] not in val_list and row[1] != 0: row[4] = row[2] elif row[4] == 0 and row[2] not in val_list and row[2] != 0: row[4] = row[3] cursor.updateRow(row) field_vals() | https://community.esri.com/thread/67890-arcpydaupdatecursor-with-list | CC-MAIN-2020-40 | refinedweb | 196 | 75.5 |
Less known bits of the Python Standard Library.
textwrap
This module has some functions for easily wrapping and indenting plain text. Its useful when you’re one of those weirdos that likes to wrap everything you print to the terminal at 80 characters. E.g.
>>> import textwrap
>>>.'
>>> for line in textwrap.wrap(text, 50):
... printprint
I’m surprised a lot of people don’t know this one. While developing in Python, you always end up doing a lot of print debugging. When dealing with more complicated data structures, like nested dictionaries, print’s output becomes unruly and that’s when pprint comes in:
>>> from pprint import pprint
>>> data = {
... 'name': 'Michael Audrey Meyers',
... 'birth_date': 'October 19, 1957',
... 'relatives' : [
... 'Donald Meyers',
... 'Edith Meyers',
... 'Judith Meyers',
... ],
... }
>>> print(data)
{'name': 'Michael Audrey Meyers', 'birth_date': 'October 19, 1957', 'relatives': ['Donald Meyers', 'Edith Meyers', 'Judith Meyers']}
>>> pprint(data)
{'birth_date': 'October 19, 1957',
'name': 'Michael Audrey Meyers',
'relatives': ['Donald Meyers', 'Edith Meyers', 'Judith Meyers']}
enum
Python has had type hinting for a while now — you knew that, right? Big companies are type hinting their code and so should you because the power of types can even make shitty languages less shitty. Before types, Python had already started moving in this direction with the enum module. It allows you to define a type as a set of predefined constants much like in other languages. Here’s an example from the documentation:
>>> from enum import Enum
>>> class Color(Enum):
... RED = 1
... GREEN = 2
... BLUE = 3
...
>>> print(Color.RED)
Color.RED
>>> print(Color.RED.name)
RED
>>> print(Color.RED.value)
1
shelve
When I want persist some data, without much fuss, I pickle whichever objects I want and write them to storage. If portability is an issue, I serialize the objects using json instead. An even simple (and less portable) alternative is to use shelve. You just instantiate a Shelf and use it like a dictionary. Dbm takes care of writing and reading your data to and from the disk. E.g.
>>> import shelve
>>> with shelve.open('default.db') as shelf:
... shelf['first_name'] = 'Vitor'
... shelf['last_name'] = 'Pereira'
...
>>> with shelve.open('default.db') as shelf:
... print(shelf['first_name'], shelf['last_name'])
...
Vitor Pereira
(Read the bit on
writebackin the documentation if you plan on making heavier use of this module)
That’s right, Python includes an RFC compliant parser and generator of email messages. I found out about this module while reading Gmail’s documentation. It also includes an SMTP client. Combining these two means you have a full fledged email client at your disposal. Here’s an example script from the documentation:
import smtplib
from email.message import EmailMessagetextfile = 'stored_email.txt'# Open the plain text file whose name is in textfile for reading.
with open(textfile) as fp:
# Create a text/plain message
msg = EmailMessage()
msg.set_content(fp.read())
msg['Subject'] = f'The contents of {textfile}'
msg['From'] = 'roger@hmail.com'
msg['To'] = 'tobias@imail.com'
# Send the message via our own SMTP server.
s = smtplib.SMTP('localhost')
s.send_message(msg)
s.quit()
winreg
(Windows exclusive!) Microsoft’s operating system includes an all purpose global key-value trash bag that software can use to store data it hates called the Windows Registry. This module’s API is pretty much a light wrapper around the native APIs so it’s not particularly pleasant to use, but hey, at least it’s an alternative to those ill-defined .REG files. | https://medium.com/@vmsp/less-known-bits-of-the-python-standard-library-46dc88490115 | CC-MAIN-2019-47 | refinedweb | 567 | 66.13 |
On Sat, May 07, 2011 at 01:35:54PM +0200, Michael Niedermayer wrote: > On Sat, May 07, 2011 at 09:56:31AM +0200, Reimar Döffinger wrote: > > --- > > libavutil/mem.c | 10 +++++++++- > > 1 files changed, 9 insertions(+), 1 deletions(-) > > > > diff --git a/libavutil/mem.c b/libavutil/mem.c > > index f0f18d1..134fcba 100644 > > --- a/libavutil/mem.c > > +++ b/libavutil/mem.c > > @@ -57,6 +57,8 @@ void free(void *ptr); > > > > #endif /* MALLOC_PREFIX */ > > > > +static const int dummy_alloc; > > + > > /* You can redefine av_malloc and av_free in your project to use your > > memory allocator. You do not need to suppress this file because the > > linker will do it automatically. */ > > @@ -72,7 +74,7 @@ void *av_malloc(size_t size) > > if(size > (INT_MAX-32) ) > > return NULL; > > else if(!size) > > - size= 1; > > + return (void *)&dummy_alloc; > > This violates ISO C malloc() semantics, as well as attribute(malloc) > semantics and its also not correctly aligned. > We might ignore ISO C as this isnt malloc(), the rest looks like a > possinble issue though > ignoring the attribute semantics means we have to remove > attribute(malloc) or risk undefined behavior on av_malloc(0) > the align should be easy to fix I have some doubts any of these really matter - the align certainly doesn't since using the returned pointer in any way would be a programming error. But we can just leave it, but in that case: could you please fix av_realloc to behave consistently? | http://ffmpeg.org/pipermail/ffmpeg-devel/2011-May/111501.html | CC-MAIN-2017-04 | refinedweb | 229 | 59.94 |
Minimal wrapper for Twitter's REST and Streaming APIs
Project description
This Python package supports Twitter’s REST and Streaming APIs (version 1.1) with OAuth 1.0 or OAuth 2.0. It works with the latest Python versions in both 2.x and 3.x branches.
Some Code Examples
[See TwitterAPI/cli.py and TwitterAPI/examples for more working examples.]
First, authenticate with your application credentials:
from TwitterAPI import TwitterAPI api = TwitterAPI(consumer_key, consumer_secret, access_token_key, access_token_secret)
Tweet something:
r = api.request('statuses/update', {'status':'This is a tweet!'}) print r.status_code
Get some tweets:
r = api.request('search/tweets', {'q':'pizza'}) for item in r: print item
Stream tweets from New York City:
r = api.request('statuses/filter', {'locations':'-74,40,-73,41'}) for item in r: print item
Notice that request() accepts both REST and Streaming API methods, and it takes two arguments: the Twitter method, and a dictionary of method parameters. In the above examples we use get_iterator() to get each tweet object. The iterator knows how to iterate both REST and Streaming API results. Alternatively, you have access to the response object returned by request(). From the response object r you can get the raw response with r.text or the HTTP status code with r.status_code. See the requests library documentation for more details.
Command-Line Usage (cli.py)
For syntax help:
python -u -m TwitterAPI.cli -h
You will need to supply your Twitter application OAuth credentials. The easiest option is to save them in TwitterAPI/credentials.txt. It is the default place where cli.py will look for them. You also may supply an alternative credentials file as a command-line argument.
Call any REST API endpoint:
python -u -m TwitterAPI.cli -endpoint statuses/update -parameters status='my tweet'
Another example (here using abbreviated option names) that parses selected output fields:
python -u -m TwitterAPI.cli -e search/tweets -p q=zzz count=10 -field screen_name text
Calling any Streaming API endpoint works too:
python -u -m TwitterAPI.cli -e statuses/filter -p track=zzz -f screen_name text
After the -field option you must supply one or more key names from the raw JSON response object. This will print values only for these keys. When the -field option is omitted cli.py prints the entire JSON response object.
Installation
From the command line:
pip install TwitterAPI
Documentation
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/TwitterAPI/2.3.3/ | CC-MAIN-2022-21 | refinedweb | 423 | 60.31 |
Content Count150
Joined
Last visited
About s4m_ur4i
- RankAdvanced Member
Contact Methods
- Website URLinstagram.com/s4m_ur4i
- s4m_ur4i
Profile Information
- GenderMale
- LocationGermany
-
- @Wandrinceen hope you like it Hey guys, new one right here: 800+ Tiles and fully animated!
-
Kristiyan reacted to a post in a topic: Game works better in Canvas than in WebGL
CarolynDenton reacted to a post in a topic: Great free and low-cost game graphics
-
hromoyDron reacted to a post in a topic: Game art-style Feedback appreciated.
ruslanfan reacted to a post in a topic: Game art-style Feedback appreciated.
-
- s4m_ur4i started following Great free and low-cost game graphics and Game art-style Feedback appreciated.
- Hey Guys, another one is online! you should check it out. all sets: twitter:
-
- HUGE release! 1500+ Sprites Please check it out!
-
-
-
Wolfsbane reacted to a post in a topic: Great free and low-cost game graphics
- Hey Guys, I just released a top-down zelda-like perspective Dungeon Tileset, with objects and monsters. please check it out!
- More Stuff! Check it out! 🎉
-
- HEY! It's again: FREE game asset day! haha I hope you enjoy! this is for the game jam:- (But you can use it even after it, or for something else ) credit is appreciated. -- If you like it, check out my other projects, I just released a big Metroidvania / roguelike/platformer Tileset: Everything is low cost because of Indies Check out everything:
- Hey Guys! Yesterday I released a free PICO-8 tileset, free for everyone to use, so grab it! If you like it, check out my other projects, I just released a big Metroidvania / roguelike / platformer Tileset: Everything is low cost because of Indies Check out everything:
problem with setScale on layers
s4m_ur4i replied to jopcode's topic in Phaser 3Hey, You mean the gaps between those tiles? If you don't have a moving camera, it can be that the graphics aren't 100%. note: If you later on plan to at a moving camera, I suggest, you create a margin around every tile in your graphics file. and add a bit more of the colors on the edges, because there will be small gaps on those graphics if the camera moves fast. Also tile culling doesn't fix this as of this threat:
Reading input from a keyevent
s4m_ur4i replied to GroundZeroCro's topic in Phaser 3Hey I'd like to share my controls class with you: Maybe it is helpful for your purpose Everytime const controls = new Controls -> if(controls.on('keyname')) is done, you get a true or false. class Controls { constructor(scene) { this.scene = scene; this.enabled = true; this.keyboard = this.scene.input.keyboard.createCursorKeys(); this.key = this.scene.input.keyboard.addKeys({ 'TAB': Phaser.Input.Keyboard.KeyCodes.TAB, 'ESC': Phaser.Input.Keyboard.KeyCodes.ESC, 'Space': Phaser.Input.Keyboard.KeyCodes.SPACE, 'X': Phaser.Input.Keyboard.KeyCodes.X, 'One': Phaser.Input.Keyboard.KeyCodes.ONE, 'Two': Phaser.Input.Keyboard.KeyCodes.TWO, 'F': Phaser.Input.Keyboard.KeyCodes.F, 'C': Phaser.Input.Keyboard.KeyCodes.C, 'W': Phaser.Input.Keyboard.KeyCodes.W, 'M': Phaser.Input.Keyboard.KeyCodes.M, 'I': Phaser.Input.Keyboard.KeyCodes.I, 'A': Phaser.Input.Keyboard.KeyCodes.A, 'S': Phaser.Input.Keyboard.KeyCodes.S, 'D': Phaser.Input.Keyboard.KeyCodes.D, 'E': Phaser.Input.Keyboard.KeyCodes.E, 'Enter': Phaser.Input.Keyboard.KeyCodes.ENTER }); } on(actionName) { switch (actionName) { case 'left': return (this.keyboard.left.isDown || this.key.A.isDown) && this.enabled; case 'right': return (this.keyboard.right.isDown || this.key.D.isDown) && this.enabled; case 'down': return (this.keyboard.down.isDown || this.key.S.isDown) && this.enabled; case 'up': return (this.keyboard.up.isDown || this.key.W.isDown) && this.enabled; case 'X': return this.key.X.isDown && this.enabled; case 'Y': return this.key.C.isDown && this.enabled; case 'M': return this.key.M.isDown && this.enabled; case 'I': return this.key.I.isDown && this.enabled; case 'jump': return (this.key.Space.isDown || this.key.F.isDown) && this.enabled; case 'B': return (this.key.E.isDown || this.key.Enter.isDown) && this.enabled; case 'LT': return this.key.One.isDown && this.enabled; case 'RT': return this.key.Two.isDown && this.enabled; case 'MENU': return this.key.ESC.isDown && 
this.enabled; } } } You can achieve the same with events, triggering a: this.scene.event.emit('key_<name>'); on a keypress every time a key is pressed.
Dynamic Text Size
s4m_ur4i replied to cruseyd's topic in Phaser 3For me, I choosed a higher Text size and just scaled it down (0.1) so you can use values below scale = 1 01. - 1.0 I think this is because the rendered text (scale 1.0) is threatened like a sprite. When scaling it up, it gets blurry too. By making the text just bigger by default and working with lower scale values, I did not see any pixelation. Hope that helped
Exclude graphics from Camera Shake
s4m_ur4i replied to wantafanta's topic in Phaser 3Personally, I found that using multiple scenes is a lot more comfortable for many things, like UI elements etc. because it can get messy if your game gets bigger. It kind of sorts it. And also is not affected by the camera of the other screen - as Rich already pointed out. The only thing is, that you have to get sure the new scene (if your game is bigger than the view) is aligned with the underlying scene.
How to align game in webpage
s4m_ur4i replied to wordplay's topic in Phaser 3add "rel=stylesheet" to your <link> element. Get sure that the path to your CSS-File is correct. The CSS seems not to be loaded as of the background of the text-boxes should be black. also: I think you need to give your canvas an ID and put it in the phaser game config, not sure about that, since I never used it that way. Maybe someone else can clarify. and add this to your CSS to get rid of browser defaults * { margin: 0; padding: 0; outline: none; text-decoration: none; border: 0; }
How to tween scale in Phaser 3
s4m_ur4i replied to makalu's topic in Phaser 3this.scene.tweens.add({ targets : this , scale : 10 ease : 'Linear', duration : 500, }); You can simplify it. targets only needs to be an array if it's multiple targets. scale: 1 does both X&Y yoyo default is false, as 0 of repeat and "this" of callbackScope.
setScrollFactor(0) sets sprites way off on camera.zoom(value)
s4m_ur4i replied to s4m_ur4i's topic in Phaser 3After digging through the documentation: I think this is not intended behavior. At least, if you have a game were the camera zoom out and in because of multiplayer or having an action platformer, there is no way to fix any sprite to the camera's view. If a sprite is set with sprite.setScrollFactor(0) it should be kept in view when the sprite's position is initially in view. Even if the camera is zoomed.Zoom is an option on the game config, and useful in a lot of cases to have your game with a base zoom. Rather than taking the scrollX and scrollY of the camera, which are minus values by zoom > 1, setScrollFactor should align to the worldview. Or at least it would be helpful to have a value that can be set to be fixed on zoom. Because, as far as I am aware, there is nothing to calculate the offset of the current zoom right now. solution: use a second scene for UI stuff, or alignment | https://www.html5gamedevs.com/profile/17504-s4m_ur4i/ | CC-MAIN-2020-24 | refinedweb | 1,250 | 67.86 |
Which syntax results in better performance?
var vRec = (bNoTracking ? tblOrders.AsNoTracking() : tblOrders);
return vRec
.Where(x => (x.WarehouseId == iWarehouseId) && (x.OrderId == iOrderId))
.FirstOrDefault<tblOrder>();
var vRec = (bNoTracking ? tblOrders.AsNoTracking() : tblOrders);
return (from rec in vRec
where (rec.WarehouseId == iWarehouseId) && (rec.OrderId == iOrderId)
select rec)
.FirstOrDefault<tblOrder>();
Both queries will be converted to the same SQL, meaning performance will be identical. It just depends on if you prefer the "fluent" syntax (
.Where()) or LINQ query expressions (
where).
The SQL generated from my test MSSQL database is as follows, revealed with LINQPad:
This looks to be about as optimized as it'll get, so I'd say no further tweaking is necessary unless you're running this select in some kind of loop. | https://codedump.io/share/NoCsChsxT4J3/1/linq-on-ef6-is-there-difference-in-terms-of-performance-between-query-syntax-and-method-calls | CC-MAIN-2017-17 | refinedweb | 121 | 62.24 |
Json data and general programming question
- hippylover last edited by
Hi, i have json data with open, close, high, low and time in unix time. How would i use backtrader to take the data as i feed it and get an alert every time there's a buy signal(for example ema goes over sma)?
Also, does backtester only take open and close data or can i feed it just last bid for example?
- hippylover last edited by
Well, i managed to get it to show on plot, but had to convert it to csv and put it in a file first... Is there no way to take the data directly from an array or something? Also, if i want each "period" for calculating/plotting ema/sma/etc to be 30-minutes, is it possible to set this or do i have to feed it half-hour data?
import backtrader as bt import requests from jq import jq import csv import backtrader.feeds as btfeeds class smaEmaCross(bt.SignalStrategy): def __init__(self): sma = bt.ind.SMA(period=50) ema = bt.ind.EMA(period=20) crossover = bt.ind.CrossOver(ema,sma) self.signal_add(bt.SIGNAL_LONG, crossover) cerebro = bt.Cerebro() cerebro.addstrategy(smaEmaCross) #get data currency = "MAID" symbol = "BTC" requestLine = '' + "histominute?fsym=" + currency + "&tsym=" + symbol + "&allData=True&e=Poloniex" res = requests.get(requestLine) start = str(int((datetime.datetime.now() - datetime.timedelta(minutes=10000)).timestamp())) dataDownload = jq('.Data[]|select(.time > ' + start + ')|{date: .time |strftime("%d/%m/%y"), time: .time |strftime("%H:%M:%S"), open: .open, close: .close, high: .high, low: .low, volume: .volumeto}').transform(res.json(), multiple_output=True) csvData = [] for i in dataDownload: csvData.append(str(i['date']) + "," + str(i['time']) + "," + str(i['open']) + "," + str(i['close']) + "," + str(i['high']) + "," + str(i['low']) + "," + str(i['volume'])) print(csvData) #write file with open("ohlc.txt","w") as outf: outf.write("date,time,open,close,high,low,volume\n") for x in csvData: outf.write(x + "\n") data = btfeeds.GenericCSVData( dataname="ohlc.txt", dtformat='%d/%m/%y', #date=0, time=1, open=2, close=3, high=4, low=5, volume=6, openinterest=-1, # -1 for not present #fromdate=datetime.datetime(2017, 1, 1), #todate=datetime.datetime(2017, 12, 31), reverse=False) cerebro.adddata(data) cerebro.run() cerebro.plot()```
- Curtis Miller last edited by
Since you can create a data feed with a pandas
DataFrame, perhaps consider using the pandas function
from_json()to convert the JSON data to a
DataFramethen use that
DataFrameas your data feed.
I'm also curious about data that isn't OHLC (like tick data).
Maybe you want to try to search community using words
bidor
tick, you may find several posts that can help. As remember it was also a blog post about it. Maybe not single post.
- backtrader administrators last edited by
You can create your own data feed and fill the values from your JSON data. See
You basically subclass and override
_loadto deliver your set of OHLC prices each time.
@hippylover said in Json data and general programming question:
Also, does backtester only take open and close data or can i feed it just last bid for example?
@Curtis-Miller said in Json data and general programming question:
I'm also curious about data that isn't OHLC (like tick data).
You can extend the data feed to pass any values you wish. See: Docs - Extending a Datafeed
In any case,
bid-askare values tied to ticks and not to larger timeframes (mostly resampled) which would only have the latest of the
bid-askvalues to act upon or else the next incoming. This platform doesn't use
bid-askfor anything.
@hippylover said in Json data and general programming question:
Also, if i want each "period" for calculating/plotting ema/sma/etc to be 30-minutes, is it possible to set this or do i have to feed it half-hour data?
With no time sample it is impossible to know what timeframe your data is in. But you can resample the data (and even replay it). See Docs - Data Resampling
You may also want to read the FAQ, to remind yourself of setting the right
timeframeand
compressionfor your data. See Community - FAQ | https://community.backtrader.com/topic/475/json-data-and-general-programming-question | CC-MAIN-2019-51 | refinedweb | 694 | 58.38 |
pause - wait for signal
Current Version:
Linux Kernel - 3.80
Synopsis
#include <unistd.h>
int pause(void);
Description
pause() causes the calling process (or thread) to sleep until a signal is delivered that either terminates the process or causes the invocation.80 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
License & Copyright
Copyright (c) 1992 Drew Eckhardt (drew@cs.colorado.edu), March 28, 1992 %% (michael@moria.de) Modified Sat Jul 24 14:48:00 1993 by Rik Faith (faith@cs.unc.edu) Modified 1995 by Mike Battersby (mib@deakin.edu.au) Modified 2000 by aeb, following Michael Kerrisk | https://community.spiceworks.com/linux/man/2/pause | CC-MAIN-2018-17 | refinedweb | 116 | 53.07 |
>>
Minimum number of coins that make a given value
There is a list of coin C(c1, c2, ……Cn) is given and a value V is also given. Now the problem is to use the minimum number of coins to make the chance V.
Note: Assume there is the infinite number of coins C.
In this problem, we will consider a set of different coins C{1, 2, 5, 10} are given, There is the infinite number of coins of each type. To make change the requested value we will try to take the minimum number of coins of any type. As an example, for value 22: we will choose {10, 10, 2}, 3 coins as the minimum.
Input and Output
Input: The required value. Say 48 Output: Minimum required coins. Here the output is 7. 48 = 10 + 10 + 10 + 10 + 5 + 2 + 1
Algorithm
minCoins(coinList, n, value)
Input: list of different coins, number of coins, given value.
Output: Minimum number of coins to get given value.
Begin if value = 0, then return 0 define coins array of size value + 1, fill with ∞ coins[0] := 0 for i := 1 to value, do for j := 0 to n, do if coinList[j] <= i, then tempCoins := coins[i-coinList[j]] if tempCoins ≠ ∞ and (tempCoins + 1) < coins[i], then coins[i] := tempCoins + 1 done done return coins[value] End
Example
#include<iostream> using namespace std; int minCoins(int coinList[], int n, int value) { int coins[value+1]; //store minimum coins for value i if(value == 0) return 0; //for value 0, it needs 0 coin coins[0] = 0; for (int i=1; i<=value; i++) coins[i] = INT_MAX; //initially all values are infinity except 0 value for (int i=1; i<=value; i++) { //for all values 1 to value, find minimum values for (int j=0; j<n; j++) if (coinList[j] <= i) { int tempCoins = coins[i-coinList[j]]; if (tempCoins != INT_MAX && tempCoins + 1 < coins[i]) coins[i] = tempCoins + 1; } } return coins[value]; //number of coins for value } int main() { int coins[] = {1, 2, 5, 10}; int n = 4, value; cout << "Enter Value: "; cin >> value; cout << "Minimum "<<minCoins(coins, n, value)<<" coins required."; return 0; }
Output
Enter Value: 48 Minimum 7 coins required.
- Related Questions & Answers
- Program to find number of coins needed to make the changes with given set of coins in Python
- C/C++ Program for Greedy Algorithm to find Minimum number of Coins
- Minimum number of given operations required to make two strings equal using C++.
- Program to find number of coins needed to make the changes in Python
- C++ program to count number of minimum coins needed to get sum k
- Minimum number of given moves required to make N divisible by 25 using C++.
- Minimum number using set bits of a given number in C++
- Minimum number of elements that should be removed to make the array good using C++.
- Minimum number of deletions to make a string palindrome in C++.
- Find out the minimum number of coins required to pay total amount in C++
- Minimum number of letters needed to make a total of n in C++.
- Minimum number of Appends needed to make a string palindrome in C++
- Minimum value that divides one number and divisible by other in C++
- Find minimum number of currency notes and values that sum to given amount in C++
- Minimum operations of given type to make all elements of a matrix equal in C++
Advertisements | https://www.tutorialspoint.com/Minimum-number-of-coins-that-make-a-given-value | CC-MAIN-2022-33 | refinedweb | 576 | 52.33 |
Hello. I am relatively new to Unity, and I am creating a 3D first-person Unity game. I have text that I want the player to click away after reading, so the text disappears. I have the code for a keyboard press that makes the text disappear (for good) once a player reads it. Here is the code I am working with; it works, however there is a problem. Code:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class TextClick : MonoBehaviour
{
    public Text text;

    void Update()
    {
        if (Input.GetKeyDown("q"))
        {
            this.gameObject.SetActive(false);
        }
    }
}
The problem I am having is that I have multiple texts that I would like to use the same code for. Once a player reads a text, they press the keyboard key shown in the code. However, as much as it does work, I am finding that all of the texts disappear once you press that key, even the texts not yet seen in gameplay.
The text is a crucial part of my game and is needed. So is there a way to use the same script on multiple objects at different times? The objects are UI Texts labeled Text_1, Text_2, and so on.

What would be the code I need?
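One possible fix (a sketch only, under the assumption that messages not yet shown start with their Text component disabled and are enabled when displayed) is to gate the key check on whether this particular text is currently visible:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: each TextClick instance only reacts while its own Text is visible,
// so pressing Q dismisses only the message currently on screen.
public class TextClick : MonoBehaviour
{
    public Text text;

    void Update()
    {
        // Assumption: unseen messages have their Text component disabled,
        // so isActiveAndEnabled is false for them until they are shown.
        if (text != null && text.isActiveAndEnabled && Input.GetKeyDown(KeyCode.Q))
        {
            gameObject.SetActive(false);
        }
    }
}
```

With this approach, the same script can sit on Text_1, Text_2, and so on; each one ignores the key press until its own text has been enabled.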
This tutorial uses Visual Studio 2017 and ASP.NET Core 2.0.
My intention is to give you a practical introduction to developing ASP.NET Core 2.0 MVC and Web API apps with the SQLite database as an alternative to traditional SQL Server. A useful utility that comes in handy when working with SQLite is SQLiteStudio. Download SQLiteStudio from its website, extract the ZIP file, place the contents in a separate folder, and run SQLiteStudio.exe. We will build an ASP.NET Core 2.0 app around a Student entity (defined below).
ASP.NET Core 2.0 MVC project
In a working directory, create a folder named SQLiteWeb. Change to the SQLiteWeb directory. Try out some of these important .NET Core 2.0 commands:
- dotnet --help – this gives you a list of common commands
- dotnet restore – restore dependencies specified in the .NET project
- dotnet build - Builds a .NET project
- dotnet run --help – provides help information about the run command
- dotnet new --help – shows the types of templates that can be scaffolded. At the time of writing there are 18 different templates
- dotnet new mvc --help – shows switches that can be used when creating an MVC application
We will create an MVC application that uses "Individual authentication" and the SQLite database.
Execute the following terminal command from within the SQLiteWeb directory:
dotnet new mvc --auth Individual
A web app is created for you and all NuGet packages are automatically restored. To run the application, execute the following command inside the terminal window:
dotnet run
Notice a message similar to the following:
Hosting environment: Production
Content root path: E:\_DEMO\SQLiteWeb
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
As described in the message, point your browser to http://localhost:5000 and you will see the default
ASP.NET Core page:
This runs your application in a web server called Kestrel that is listening on port 5000. Register a new user.
Stop the web server by hitting Ctrl+C. If you are curious about where the data is saved and the location of the SQLite database, you will find a *.db file located in the bin/Debug/netcoreapp2.0 directory. Have a peek at its contents using the SQLiteStudio utility mentioned earlier in this article.
To open your web application in Visual Studio, start Visual Studio then open the SQLiteWeb.csproj file.
File >> Open >> Project/Solution
Hit CTRL + F5 in Visual Studio 2017. This time, the web application will start and will be hosted by IIS Express.
When working with ASP.NET Core, you will need to go to the command-line interface frequently. Add a command-prompt extension to make it easier. Click on Tools >> Extensions and Updates…
Find an extension named “Open Command Line”.
If you have not already installed it, install this extension.
In Solution Explorer, right-click on the SQLiteWeb node, then choose “Open Command Line” >> “Default (cmd)”
This opens a regular operating system terminal window.
The Student class
Inside of the Models folder, add a class file named Student.cs. Use the following code for the class file:

public class Student
{
    public int StudentId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string School { get; set; }
    public DateTime StartDate { get; set; }
}
Add the following property to the ApplicationDbContext.cs class file located in the Data directory.
public DbSet<Student> Students { get; set; }
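For context, the resulting ApplicationDbContext (as scaffolded by the "Individual" authentication template, with the new property added; the namespace here is assumed from the project name) looks roughly like this:

```csharp
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;
using SQLiteWeb.Models;

namespace SQLiteWeb.Data
{
    public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
    {
        public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
            : base(options)
        {
        }

        // Exposes the Students table to EF Core queries and migrations.
        public DbSet<Student> Students { get; set; }
    }
}
```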
Notice the connection string in the appsettings.json file:
{
"ConnectionStrings": {
"DefaultConnection": "DataSource=app.db"
},
"Logging": {
"IncludeScopes": false,
"LogLevel": {
"Default": "Warning"
}
}
}
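The template wires this connection string to the SQLite provider in Startup.ConfigureServices. A sketch of the relevant registration, as generated by the "Individual" authentication template, for orientation:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // "DefaultConnection" is read from appsettings.json and handed to the
    // SQLite provider; this is what creates/opens the app.db file.
    services.AddDbContext<ApplicationDbContext>(options =>
        options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

    services.AddIdentity<ApplicationUser, IdentityRole>()
        .AddEntityFrameworkStores<ApplicationDbContext>()
        .AddDefaultTokenProviders();

    services.AddMvc();
}
```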
Migrations
We are now ready to do some migrations:
- Compile your application
- Open a command terminal inside the main project SQLiteWeb folder
- Add a migration to the project with the following ef command:
dotnet ef migrations add "First Migration"
Notice that class files are created in the Data/Migrations folder.
- We will then update the database with the following terminal command:
dotnet ef database update
You will experience the following error:
SQLite does not support this migration operation ('AddForeignKeyOperation'). For more information, see.
This error is caused by the fact that SQLite cannot alter tables and indexes during a migration. See this article. The workaround is to comment out all the lines of code in the "Data/Migrations/xxxxxx_First Migration.cs" file that do not pertain to the Students entity. This should be done in both the Up() and Down() methods. Thereafter, run the "dotnet ef database update" command again and it should complete successfully.
Seed Data
Before we carry out code first migrations, let us first create some seed data:
- In the Models folder, create a class named DummyData.cs.
- Add the following Initialize() method code inside the DummyData class:
public static void Initialize(ApplicationDbContext db) {
if (!db.Students.Any()) {
db.Students.Add(new Student {
FirstName = "Bob",
LastName = "Doe",
School = "Engineering",
StartDate = Convert.ToDateTime("2015/09/09")
});
db.Students.Add(new Student {
FirstName = "Ann",
LastName = "Lee",
School = "Medicine",
StartDate = Convert.ToDateTime("2014/09/09")
});
db.Students.Add(new Student {
FirstName = "Sue",
LastName = "Douglas",
School = "Pharmacy",
StartDate = Convert.ToDateTime("2016/01/01")
});
db.Students.Add(new Student {
FirstName = "Tom",
LastName = "Brown",
School = "Business",
StartDate = Convert.ToDateTime("2015/09/09")
});
db.Students.Add(new Student {
FirstName = "Joe",
LastName = "Mason",
School = "Health",
StartDate = Convert.ToDateTime("2015/01/01")
});
db.SaveChanges();
}
}
To generate seed data, we will first inject the dependency “ApplicationDbContext context” into the arguments of the Configure() method in Startup.cs. Next, we can make a call to seed the data at the bottom of the Configure() method with the following statement:
DummyData.Initialize(context);
At this point, data will not have been seeded yet because this happens when the application is actually run.
Creating an MVC UI
Let us seed the data by running your web application in a browser. You should see the same page as we saw earlier. Let us create a UI so that we can see the seeded data.
Right-click on the Controllers folder and choose Add >> New Item… >> Controller...
Choose "MVC Controller with views, using Entity Framework" then click on Add.
Model Class=Student, Data context class=ApplicationDbContext
Click on Add. If you are asked to save the solution file then accept the default location and save it in your project root folder.
You’ll notice that the controller takes an ApplicationDbContext as a constructor parameter. ASP.NET dependency injection will take care of passing an instance of ApplicationDbContext into your controller.
The controller contains an Index action, which displays all students in the database, and a Create action, which inserts a new student into the database.
- Let us add a link to the Students controller on the main page of our application. Open _Layout.cshtml under Views/Shared.
- Paste the following markup in the navigation section around line 36:
- Run the application then click on the Students link. You should see the dummy data that we created.
- Add a new student to the database.
The WebAPI Controller
Let us add a Web API Studentsapi controller to our project.
- Right-click on the Controllers folder >> Add > Controller...
- Select "API Controller with actions, using Entity Framework" then click Add.
- Model class=Student, Data context class=ApplicationDbContext, Controller name=StudentsapiController
- Click on Add
- Hit CTRL-F5 on your keyboard and point your browser to /api/studentsapi. You will see the seed data appearing as JSON in the browser.
.tbn *Deprecated, Eden format but still functional
4 Folder Thumbnails
5 Program Thumbnails
In order to display the image shown for a game, emulator or application, simply rename the desired image to default.tbn
- This includes an embedded thumbnail cached with the fully qualified name.
- 3) Remote filename.tbn — this is a filename.tbn on a remote share/folder which is then cached.
- This is a foldername.tbn on a remote share/folder which is then cached.
Examples:
In this example audiofilename.mp3 will use audiofilename.tbn as thumbnail:
Music\path\audiofilename.mp3
Music\path\audiofilename.tbn
The same goes for playlists, cue-sheets, SHOUTcast, and internet-stream files.
7 Video Thumbnails
You can either use your own custom thumbnails or XBMC can retrieve video thumbnails from the internet via scrapers and cache them locally.
XBMC applies thumbnails to video files in the following order:
Movies\path\moviename.tbn
- Movies in Folders
In the following example "videofilename.avi" will now use the thumbnail "poster.(jpg/png)".
Movies\path\poster.(jpg/png)
Movies\path\videofilename.avi
Note that movie.tbn is deprecated but still functional. A stream file will use a matching .tbn in the same way:
Streams\path\videostreamname.strm
Streams\path\videostreamname.tbn
11 Hashing
The thumbnail .tbn file is created via a hashing function. As explained earlier, the hash is based off the CRC32 of the pathname (plus filename) in lowercase. Files which are local are hashed using their drive letter. Remote files are hashed using the
smb:// protocol designation and optional username and password.
11.1 Examples
123456789 returns 0376e6e7
F:\Videos\Nosferatu.avi returns 2a6ec78d
smb://user:pass@server/share/directory/ returns c5559f13
smb://user:pass@server/share/directory/file.ext returns 8ce36055
Remember:
- When hashing remote shares, use the path as displayed in the sources.xml file, which can include the username and password.
- When hashing directories for thumbnails, include the final slash.
11.2 Sample code
The following code snippets produce the same output as the XBMC hashing function.
11.2.1 C#
public string Hash(string input)
{
    char[] chars = input.ToCharArray();
    for (int index = 0; index < chars.Length; index++)
    {
        if (chars[index] <= 127)
        {
            chars[index] = System.Char.ToLowerInvariant(chars[index]);
        }
    }
    input = new string(chars);

    uint m_crc = 0xffffffff;
    byte[] bytes = System.Text.Encoding.UTF8.GetBytes(input);
    foreach (byte myByte in bytes)
    {
        m_crc ^= ((uint)(myByte) << 24);
        for (int i = 0; i < 8; i++)
        {
            if ((System.Convert.ToUInt32(m_crc) & 0x80000000) == 0x80000000)
            {
                m_crc = (m_crc << 1) ^ 0x04C11DB7;
            }
            else
            {
                m_crc <<= 1;
            }
        }
    }
    return String.Format("{0:x8}", m_crc);
}
11.2.2 Python
def get_crc32( string ):
    string = string.lower()
    bytes = bytearray(string.encode())
    crc = 0xffffffff
    for b in bytes:
        crc = crc ^ (b << 24)
        for i in range(8):
            if (crc & 0x80000000):
                crc = (crc << 1) ^ 0x04C11DB7
            else:
                crc = crc << 1
        crc = crc & 0xFFFFFFFF
    return '%08x' % crc
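As a quick sanity check, the Python implementation can be verified against the test vectors from section 11.1. The snippet below duplicates the function so it runs standalone:

```python
def get_crc32(string):
    # Same algorithm as above: MSB-first CRC32, polynomial 0x04C11DB7,
    # initial value 0xFFFFFFFF, with the input lowercased first.
    string = string.lower()
    crc = 0xffffffff
    for b in bytearray(string.encode()):
        crc = crc ^ (b << 24)
        for i in range(8):
            if crc & 0x80000000:
                crc = (crc << 1) ^ 0x04C11DB7
            else:
                crc = crc << 1
        crc = crc & 0xFFFFFFFF
    return '%08x' % crc

# First test vector from section 11.1:
print(get_crc32("123456789"))  # -> 0376e6e7
```

The "123456789" vector matches the standard CRC-32/MPEG-2 check value, which is the variant this hashing function implements.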
11.2.3 Perl
sub get_crc32 {
    my $string = shift;
    my @bytes = unpack 'C*', $string;
    my $crc = 0xffffffff;
    for my $b (@bytes) {
        $crc = $crc ^ ($b << 24);
        for (my $i = 0; $i < 8; $i++) {
            if ($crc & 0x80000000) {
                $crc = ($crc << 1) ^ 0x04C11DB7;
            } else {
                $crc = $crc << 1;
            }
        }
        $crc = $crc & 0xFFFFFFFF;
    }
    return sprintf('%08x', $crc);
}
11.2.4 PHP
Code provided by tamplan and narfight.
private function _get_hash($file_path)
{
    $chars = strtolower($file_path);
    $crc = 0xffffffff;
    for ($ptr = 0; $ptr < strlen($chars); $ptr++)
    {
        $chr = ord($chars[$ptr]);
        $crc ^= $chr << 24;
        for ((int) $i = 0; $i < 8; $i++)
        {
            if ($crc & 0x80000000)
            {
                $crc = ($crc << 1) ^ 0x04C11DB7;
            }
            else
            {
                $crc <<= 1;
            }
        }
    }
    // 64-bit operating system?
    if (strpos(php_uname('m'), '_64') !== false)
    {
        // Formatting the output as an 8 character hex string
        if ($crc >= 0)
        {
            $hash = sprintf("%16s", sprintf("%x", sprintf("%u", $crc)));
        }
        else
        {
            $source = sprintf('%b', $crc);
            $hash = "";
            while ($source <> "")
            {
                $digit = substr($source, -4);
                $hash = dechex(bindec($digit)) . $hash;
                $source = substr($source, 0, -4);
            }
        }
        $hash = substr($hash, 8);
    }
    else
    {
        // Formatting the output as an 8 character hex string
        if ($crc >= 0)
        {
            $hash = sprintf("%08s", sprintf("%x", sprintf("%u", $crc)));
        }
        else
        {
            $source = sprintf('%b', $crc);
            $hash = "";
            while ($source <> "")
            {
                $digit = substr($source, -4);
                $hash = dechex(bindec($digit)) . $hash;
                $source = substr($source, 0, -4);
            }
        }
    }
    return $hash;
}
11.2.5 Javascript
Code provided by Fiasco and baderj.
Number.prototype.unsign = function(bytes) {
    return this >= 0 ? this : Math.pow(256, bytes || 4) + this;
};

function FindCRC(data) {
    var CRC = 0xffffffff;
    data = data.toLowerCase();
    for (var j = 0; j < data.length; j++) {
        var c = data.charCodeAt(j);
        CRC ^= c << 24;
        for (var i = 0; i < 8; i++) {
            if (CRC.unsign(8) & 0x80000000) {
                CRC = (CRC << 1) ^ 0x04C11DB7;
            } else {
                CRC <<= 1;
            }
        }
    }
    if (CRC < 0) CRC = CRC >>> 0;
    var CRC_str = CRC.toString(16);
    while (CRC_str.length < 8) {
        CRC_str = '0' + CRC_str;
    }
    return CRC_str;
}
11.2.6 MySQL Function
Found this to be very useful when using a MySQL backend and moving/updating files. Code provided by User:Nxj18
create function fnXBMCHash(sourceString VARCHAR(2000)) returns varchar(8)
deterministic
begin
    declare crc bigint unsigned; -- bigint to prevent casting/overflow issues
    declare len, cur, i int;
    declare mask, xorBase, curCharCode, intMask bigint unsigned;

    set intMask = pow(2,32) - 1;
    set crc = pow(2,32) - 1; -- 0xFFFFFFFF
    set sourceString = LOWER(TRIM(sourceString));
    set mask = pow(2,31); -- 0x80000000
    set xorBase = 79764919; -- 0x04C11DB7
    set len = LENGTH(sourceString), cur = 0;

    while cur < len do
        set curCharCode = ASCII(SUBSTRING(sourceString, cur+1, 1));
        set crc = (crc ^ (curCharCode << 24)) & intMask;
        set i = 0;
        while i < 8 do
            set crc = (case (crc & mask) when mask then (crc << 1) ^ xorBase else (crc << 1) end) & intMask;
            set i = i + 1;
        end while;
        set cur = cur + 1;
    end while;

    return lpad(hex(crc),8,'0');
end;
11.2.7 AutoIT function
Code provided by Nexus.Commander.
func CRC32_XBMC($string_input)
    $chars = StringSplit(StringLower($string_input), '', 2)
    $crc = 0xffffffff
    For $ptr = 0 To UBound($chars)-1
        $chr = StringToBinary($chars[$ptr], 4)
        $crc = BitXOR($crc, BitShift($chr, -24))
        For $i = 0 To 7
            if BitAND($crc, 0x80000000) = 0x80000000 Then
                $crc = BitXOR(BitShift($crc, -1), 0x04C11DB7)
            else
                $crc = BitShift($crc, -1)
            EndIf
        Next
    Next
    Return Hex($crc)
EndFunc
11.2.8 Java
public String Hash(String input) {
    int m_crc = 0xffffffff;
    input = input.toLowerCase();
    byte msg[] = input.getBytes();
    for (int i = 0; i < msg.length; i++) {
        int p = (msg[i]) << 24;
        m_crc ^= p;
        for (int j = 0; j < 8; j++) {
            if ((m_crc & 0x80000000) == 0x80000000) {
                m_crc = (m_crc << 1) ^ 0x04C11DB7;
            } else {
                m_crc <<= 1;
            }
        }
    }
    return String.format("%08x", m_crc);
}
Hi!
We need to run a query using a join between two different linked issue types, so I think I should use SQL against the JIRA database.
I have generated the database schema using the JIRA Diagram Scheme Generator plugin. But I don't understand which steps I should follow to be able to run an SQL query, and where can I run it?
Please, I'll apreciate your help.
Thanks in advance.
Generally, running SQL against a JIRA database is the worst way to do any form of reporting. It is not designed to be reported on.
Of course, the application gives you no way to get to the database either, as you shouldn't be doing it.
And, never, never, never, write to a JIRA database.
To run your SQL, you will need a database tool to connect to the database.
But I'd strongly recommend that you not bother, as it's very likely that you won't know enough about the database structure to actually get what you want from it.
Could you explain what question you're trying to answer? Not "I think I want to use SQL", but what are you trying to find out about your JIRA system?
Hi Nic,
Thanks for your answer.
I need an SQL query to get all the Stories whose Sprint is greater than their parent Epic's, and all the Tasks whose Sprint is greater than their parent Story's, in a JIRA project.
I found some options but, based on Atlassian's answers, it is not possible, and they have recommended that I use the database schema.
Please, could you guide me through it?
Thanks in advance.
I'm sorry, I can't work out what that means.
The main problem I have is understanding what "sprint greater than" means.
For example, you say: Stories with the Sprint greater than the parent Epics
Does that mean
There are further problems with all of the interpretations I can make, but I know I do not understand the question.
I am using a Groovy script in a transition post function to ask the database for the person that did a specific transition in the issue workflow. There might be an API function for that too, but writing the query and executing it was the fastest way for me.
import com.atlassian.jira.ComponentManager
import com.atlassian.jira.component.ComponentAccessor
import groovy.sql.Sql
import java.sql.Connection
import org.ofbiz.core.entity.ConnectionFactory
import org.ofbiz.core.entity.DelegatorInterface
import com.atlassian.jira.issue.Issue
import com.atlassian.jira.issue.MutableIssue;
import com.atlassian.jira.issue.ModifiedValue
import com.atlassian.jira.issue.util.DefaultIssueChangeHolder
import com.atlassian.jira.user.util.UserManager
import com.atlassian.jira.util.ImportUtils
//import com.atlassian.crowd.embedded.api.User

//Issue issue = issue
//def id = issue.getId()

ComponentManager componentManager = ComponentManager.getInstance()
def delegator = (DelegatorInterface) componentManager.getComponentInstanceOfType(DelegatorInterface.class)
String helperName = delegator.getGroupHelperName("default");

def sqlStmt = """
SELECT a.author as 'doer'
FROM changegroup as a
JOIN changeitem as b ON b.groupid = a.id
WHERE b.field = 'status'
  AND a.issueid = ${issue.id}
  AND b.oldstring = 'In Progress'
  AND b.newstring = 'Review'
ORDER BY a.created DESC
LIMIT 1
"""

Connection conn = ConnectionFactory.getConnection(helperName)
Sql sql = new Sql(conn)

try {
    StringBuffer sb = new StringBuffer()
    sql.eachRow(sqlStmt) {
        sb << it.doer
    }
    def userManager = (UserManager) ComponentAccessor.getUserManager()
    def user = userManager.getUserByName(sb.toString())
    issue.setAssignee(user)
} finally {
    sql.close()
}
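The SQL at the heart of the script can be tried outside JIRA. The following self-contained mock uses Python's sqlite3, with the two change-history tables reduced to just the columns the query touches (real JIRA schemas have more columns, and the sample rows here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE changegroup (id INTEGER, issueid INTEGER, author TEXT, created TEXT);
CREATE TABLE changeitem (groupid INTEGER, field TEXT, oldstring TEXT, newstring TEXT);
INSERT INTO changegroup VALUES (1, 100, 'alice', '2018-01-01'),
                               (2, 100, 'bob',   '2018-01-02');
INSERT INTO changeitem VALUES (1, 'status', 'In Progress', 'Review'),
                              (2, 'status', 'In Progress', 'Review');
""")

# Same shape as the query in the Groovy script, for a fixed issue id:
row = conn.execute("""
    SELECT a.author AS doer
    FROM changegroup AS a
    JOIN changeitem AS b ON b.groupid = a.id
    WHERE b.field = 'status'
      AND a.issueid = 100
      AND b.oldstring = 'In Progress'
      AND b.newstring = 'Review'
    ORDER BY a.created DESC
    LIMIT 1
""").fetchone()
print(row[0])  # -> bob (the most recent author of that transition)
```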
To execute Groovy scripts in a transition post function you need the ScriptRunner plugin.
Hope you will get an idea how to do what you want to do.
That method may not give you the right answer if a change was recent.
mmh, so there is a delay between transitioning the issue and writing the change to database?
It isn't a problem for us because there are a lot of review and stage-testing statuses that take several days before that script is executed.
Do you have experience with when things like changes, new issues, etc. are written to the database?
You're probably alright then as you're giving it plenty of time.
People reporting via SQL and getting the wrong answer was the first thing that made me start looking into JIRA database usage, and it very quickly led me to "the database is always the worst option". There are times when it's the only option, but it's always the worst.
Thanks Nic!
For more clarity, we updated our previous question with the necessary details. Please see
Thanks in advance.
About the error when you try to run the script using python3 - it is strange
About the wrong measurements: do you use a pull-up resistor?
Yes, I'm using a 4.7k resistor.
Here is a Picture of the wiring:
I just switched from 3.3V to 5V... and the measurements seem to be correct.
OK, so far so good. I've edited the sudo crontab and rebooted the Pi, and everything shows up just fine.
Now, the DHT22 just has to stay online...
@ognqn.chikov I want to thank you for your help. I was struggling with this sensor for weeks (even months). As it seems, it is working great... for now... Thank you very much!
You've done a great job.
Hello, it is OK. On Monday I will receive mine, and I will also update you with my results. The great job was done by everyone here. Cheers.
Hello everyone. Sadly, my DHT22 went offline 13 hours ago.
Mine is working very well. Ah... did you add the starting of the file in crontab? One time I was running the file using sudo python3 filename, and after a while the connection through PuTTY was interrupted, and so was the running of the file.
Yeah... you're right. I started the script manually and forgot to edit the crontab...
I changed it from:
@reboot python /home/pi/python/tempsensor.py
to:
@reboot python2 /home/pi/python/tempsensor.py
Is this how it should look?
Not exactly. I think the "&" sign has to be included at the end. This will allow other scripts to be processed.
@reboot python /home/pi/python/tempsensor.py &
or if you run it with python3:
@reboot python3 /home/pi/python/tempsensor.py &
EDIT: I am still waiting for my sensor so I can test.
Hello, I'm new to Cayenne and the community. I want to start off by saying thanks to all! I am also trying to use this sensor's humidity reading to control a relay that I have installed. I have already successfully installed a DS18B20. How difficult is it to use MQTT and install the DHT22? Any help is greatly appreciated.
Austin
It is very easy. Let's start from somewhere? Do you need help with wiring?
Hello,
this time i used:@reboot python /home/pi/python/tempsensor.py &
...And the DHT22 stayed Online the last 24h. \o/
Mine is ready to be fired up
Actually...It`s really easy..even in this config (MQTT Device & Running Script on the Pi)
You just connect the Sensor to the Pi:
Step 1: You create a MQTT-Device in the Cayenne Dashboard and Write down (or copy) the MQTT Informations.Step 2: You put the Code from above into a *.py FileStep 3: You Insert your MQTT-Informations & the GPO Pin you used & save the FileStep 4: You edit the "Crontab" of the Pi so the Script(File) will start when the Pi boots up.
This is just a basic overview of the needed Steps but all in all it`s easy to set up...
Thank you for the response . I will be receiving my sensor this Friday from amazon , is this the correct sensor to purchase ? i already have a 4.7 K resistor and jumper wires . What exactly is a MQTT device ?
-Austin
Hello @austin.rodriguez210, I am using 10K pullUp resistor. I don't know which is better and correct, but Adafruit recommend 10K resistor. Arduino recommend 4.7. It will work with both, for sure
I can describe the MQTT Device as a technology capable of connecting remote data-collecting devices. If you are curious, you can read this publication -> HERE
I edited the first post to reflect the mqttc.loop() change.
Finally had some time at home to look at this a bit more. I do indeed have the mqttc.loop() line in my code (at the bottom of the try, but that should not matter), so I'm still not sure why it is still not working for me. I'll leave this Pi on for a few days to see how it goes.
import paho.mqtt.client as mqtt
import time
import sys
import Adafruit_DHT
time.sleep(30) #Sleep to allow wireless to connect before starting MQTT
mqttc = mqtt.Client(client_id="280a0f40-d51b-11e6-b089-9f6bfa78ab33")
mqttc.username_pw_set("95d334c0-a90b-11e6-a7c1-b395fc8a1540", password="5c5a35f990e18e06f068d19d5f56f169cce24c48")
mqttc.connect("mqtt.mydevices.com", port=1883, keepalive=60)
topic_dht11_temp = "v1/95d334c0-a90b-11e6-a7c1-b395fc8a1540/things/280a0f40-d51b-11e6-b089-9f6bfa78ab33/data/1"
topic_dht11_humidity = "v1/95d334c0-a90b-11e6-a7c1-b395fc8a1540/things/280a0f40-d51b-11e6-b089-9f6bfa78ab33/data/2"
topic_dht22_temp = "v1/95d334c0-a90b-11e6-a7c1-b395fc8a1540/things/280a0f40-d51b-11e6-b089-9f6bfa78ab33/data/3"
topic_dht22_humidity = "v1/95d334c0-a90b-11e6-a7c1-b395fc8a1540/things/280a0f40-d51b-11e6-b089-9f6bfa78ab33/data/4"
while True:
    try:
        humidity11, temp11 = Adafruit_DHT.read_retry(11, 17)
        humidity22, temp22 = Adafruit_DHT.read_retry(22,)
        mqttc.loop()
    except (EOFError, SystemExit, KeyboardInterrupt):
        mqttc.disconnect()
        sys.exit()
The issue might be the time.sleep(5) keeping the loop from processing. You could try removing that and setting the timeout to 5 seconds in the mqttc.loop() call, or using a check for elapsed time in the while loop and only running the sensor code every 5 seconds.
time.sleep(5)
mqttc.loop()
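The elapsed-time check suggested above can be sketched with a small helper class. This is illustrative only; the commented-out sensor/MQTT calls assume the objects from the thread's script exist:

```python
import time

class Every:
    """Fires at most once per `interval` seconds inside a busy loop."""
    def __init__(self, interval):
        self.interval = interval
        self.last = float("-inf")

    def due(self, now=None):
        if now is None:
            now = time.monotonic()
        if now - self.last >= self.interval:
            self.last = now
            return True
        return False

# Sketch of the loop (sensor/MQTT objects assumed to exist as in the thread):
# every5 = Every(5)
# while True:
#     if every5.due():
#         humidity22, temp22 = Adafruit_DHT.read_retry(22, pin)
#         # ... publish the readings here ...
#     mqttc.loop(timeout=1.0)   # network loop keeps running between reads
```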
@adam I can test with your code for a couple of days. I can start the tests immediately | http://community.mydevices.com/t/dht11-dht22-with-raspberry-pi/2015?page=5 | CC-MAIN-2017-39 | refinedweb | 904 | 77.64 |
Creating a Simplified Asynchronous Call Pattern for Windows Forms Applications
David Hill
Microsoft Corporation
March 2004
Summary: By way of a blog entry, David Hill explains how you can implement an asynchronous call pattern that allows you to consume Web services from a Windows Forms application without having to worry about threads. (8 printed pages)
Download the AsyncServiceAgent.msi sample file.
Note This article is derived from a blog entry at. As such, the information in this article is provided "AS IS" with no warranties, and confers no rights. This article does not represent the thoughts, intentions, plans or strategies of Microsoft. It is solely the opinion of the author. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose. The original blog entry was edited for readability and Microsoft style guidelines.
Contents
Introduction
Service Agents
The UI Thread
The .NET Asynchronous Call Pattern
A Simplified Asynchronous Call Pattern
Is It Worth It?
Introduction
I have written a number of smart client applications recently that employ some form of asynchronous call behavior to prevent the UI freezing while the application makes a Web service call in the background. Now, it's true that the .NET Framework provides a generic pattern for making asynchronous calls, but I find that this is sometimes a little unwieldy to use from within a Windows Forms application because of the need to ensure that the UI thread is used correctly.
In this article, I'll describe how you can implement a simpler asynchronous call pattern that allows you to consume Web services from a Windows Forms application without having to worry about how background threads interact with the UI thread ever again.
Service Agents
Visual Studio® .NET generates a nice Web service proxy class that allows the Web service to be called asynchronously, but this proxy class implements the generic .NET Framework asynchronous call pattern which, as I illustrate below, is rather inconvenient to use from a Windows Forms application. For this reason, I generally don't use the generated proxy classes directly, but instead employ an intermediate Service Agent class.
Service agents are classes that provide additional functionality that help a client interact with a Web service. Service agents can implement many useful features, such as data caching, security credential management, offline operation support, and so on. The Service Agent class created in this article provides a much simpler asynchronous call pattern than what is provided by the proxy class.
I could have built the additional functionality into the generated proxy class directly, but I like to leave the proxy class exactly as generated by Visual Studio and only change it by hand when absolutely necessary. Apart from anything else, this prevents me from losing code when I refresh the Web reference. The Service Agent class uses the generated Web service proxy class to make the actual Web service calls.
The UI Thread
An application starts off with one thread that is used to create and manage the user interface. This thread is called the UI thread. A developer's natural instinct is to use the UI thread for everything, including making Web service calls, remote object calls, calls into the database, and so on. This can lead to major usability and performance issues.
The problem is that you can never reliably predict how long a Web service, remote object, or database call takes. And if you make such a call on the UI thread, there will come a time when the UI locks up and you have irritated the user to no end.
Naturally, you would want to do these kinds of calls on a separate thread, but I would go a step further and say that you should do all non-UI related tasks on a separate thread. I am firmly of the opinion that the UI thread should be used solely for managing the UI and all calls to objects where you can't absolutely guarantee sub-second (or better) response times should be asynchronous, be they in-process, cross-process, or cross-machine.
In any case, to help make asynchronous calls easier to handle from the UI thread, I have been playing with a simplified asynchronous call pattern that looks something like the one that will be available in the Visual Studio 2005. To start with, let's examine how the normal .NET Framework asynchronous call pattern works.
The .NET Asynchronous Call Pattern
An object that supports the.NET Framework asynchronous call pattern, such as a generated Web service proxy class, has a Begin and an End method for each exposed Web method. To make an asynchronous call, a client calls the Begin method, which returns immediately, or at least once it has setup a separate thread to make the actual Web service call. At some later point in time, the client calls the End method when the Web service call has been completed.
When does the client know when to call the End method? The Begin method returns an IAsyncResult object that you can use to track the progress of the asynchronous call, and you could use this object to explicitly wait for the background thread to finish, but doing this from the UI thread defeats the whole point of doing the work synchronously. A better approach, for a process with a user interface, is to register a callback so that you are notified automatically when the work is completed.
Let's look at some sample code. In this example, say we want to retrieve some customer data from a Web service asynchronously. We have a Web service proxy object and it supports a Web method named GetCustomerData. We can start the Web service call and register for a callback using the following code, which we assume is invoked on the UI thread in response to some user interaction with the application's user interface.
private void SomeUIEvent( object sender, EventArgs e )
{
    // Create a callback delegate so we will
    // be notified when the call has completed.
    AsyncCallback callBack = new AsyncCallback( CustomerDataCallback );

    // Start retrieving the customer data.
    _proxy.BeginGetCustomerData( "Joe Bloggs", callBack, null );
}
Where CustomerDataCallback is the method that gets called when the Web service call finally returns. In this method, we need to call the End method on the Web service proxy object to actually retrieve the customer data. We might implement this method like so:
Now, it is important to note that this method is called on the background worker thread. If we want to update the UI with the newly obtained data (say we want to update a data grid control to display the customer data), we have to be careful to do this on the UI thread. If we don't, then all manner of strange things may happen and we will have a difficult time diagnosing which bug to fix.
So how do we switch threads? Well, we can use the Control.Invoke method, which all Control-derived objects implement. This allows us to specify a method that is called on the UI thread, and that method is where we can safely update our user interface. To use the Control.Invoke method, we have to pass a delegate to our UI update method. Our CustomerDataCallback method would then look something like this:
public void CustomerDataCallback( IAsyncResult ar )
{
    // Retrieve the customer data.
    _customerData = _proxy.EndGetCustomerData( ar );

    // Create an EventHandler delegate.
    EventHandler updateUI = new EventHandler( UpdateUI );

    // Invoke the delegate on the UI thread.
    this.Invoke( updateUI, new object[] { null, null } );
}
The UpdateUI method might be implemented like this:
While this is not exactly rocket science, I find the need to make such a 'double hop' needlessly complicated. The problem is that the original caller of the asynchronous method (that is, the WinForm class in this example) is made to be responsible for switching threads, and this requires using another delegate and the Control.Invoke method.
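For readers who want the thread-marshalling idea in isolation, here is a minimal Python sketch (not from the article): a queue plays the role of the UI thread's message pump, and `invoke` plays the role of Control.Invoke:

```python
import queue
import threading

class UiDispatcher:
    """Stand-in for Control.Invoke: marshals calls onto a single 'UI' thread."""
    def __init__(self):
        self._work = queue.Queue()

    def invoke(self, fn, *args):
        # Called from any thread: queue the callback instead of running it here.
        self._work.put((fn, args))

    def pump_once(self, timeout=5.0):
        # Called on the UI thread: run one queued callback.
        fn, args = self._work.get(timeout=timeout)
        fn(*args)

ui = UiDispatcher()
updates = []

def update_ui(data):
    updates.append(data)          # safe: only ever runs on the "UI" thread

def background_call():
    data = "customer data"        # imagine EndGetCustomerData(ar) here
    ui.invoke(update_ui, data)    # do NOT touch the UI from this thread

worker = threading.Thread(target=background_call)
worker.start()
worker.join()
ui.pump_once()                    # the UI thread drains the queue
print(updates)                    # -> ['customer data']
```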
A Simplified Asynchronous Call Pattern
One technique that I often employ to lessen the complexity, and the amount of code required, of making asynchronous calls is to factor the thread switching and delegate stuff into an intermediate class. This makes it easy to make asynchronous calls from the UI class without having to worry about such things as threads and delegates. I call this technique Auto Callback. Using this technique, the example above would look like so:
Once the Web service call completes, the following method is invoked automatically.
The name of the callback method is inferred from the name of the original asynchronous call (so there is no need for constructing and passing a delegate), and it is guaranteed to call on the correct thread (so there is no need to use Control.Invoke). This approach is simpler and less error-prone.
Now, there's no such thing as a free lunch, so where is all the magic code that enables this much simpler model? Well, it's built into the ServiceAgent class, which looks like this:
public class ServiceAgent : AutoCallbackServiceAgent
{
    private CustomerWebService _proxy;

    // Declare a delegate to describe the autocallback
    // method signature.
    private delegate void GetCustomerDataCompletedCallback( DataSet customerData );

    public ServiceAgent( object callbackTarget ) : base( callbackTarget )
    {
        // Create the Web service proxy object.
        _proxy = new CustomerWebService();
    }

    public void BeginGetCustomerData( string customerId )
    {
        _proxy.BeginGetCustomerData( customerId,
            new AsyncCallback( GetCustomerDataCallback ), null );
    }

    private void GetCustomerDataCallback( IAsyncResult ar )
    {
        DataSet customerData = _proxy.EndGetCustomerData( ar );
        InvokeAutoCallback( "GetCustomerDataCompleted",
            new object[] { customerData },
            typeof( GetCustomerDataCompletedCallback ) );
    }
}
The service agent is easy to write in this case and does not require much code. It is fully reusable, so we only need to write it once and we can use it from any WinForm class. We are effectively shifting the responsibility for managing the threading issues away from the developer of the client code and on to the developer of the object that provides the asynchronous call API. Which in this case is the service agent, which is where it belongs. We can of course use this technique on any object that provides an asynchronous API.
The AutoCallbackServiceAgent base class is a simple generic class that implements the InvokeAutoCallback method. It looks like this:
public class AutoCallbackServiceAgent
{
    private object _callbackTarget;

    public AutoCallbackServiceAgent( object callbackTarget )
    {
        // Store reference to the callback target object.
        _callbackTarget = callbackTarget;
    }

    protected void InvokeAutoCallback( string methodName,
        object[] parameters, Type delegateType )
    {
        // Create a delegate of the correct type.
        Delegate autoCallback = Delegate.CreateDelegate( delegateType,
            _callbackTarget, methodName );

        // If the target is a control, make sure we
        // invoke it on the correct thread.
        Control targetCtrl = _callbackTarget as System.Windows.Forms.Control;
        if ( targetCtrl != null && targetCtrl.InvokeRequired )
        {
            // Invoke the method from the UI thread.
            targetCtrl.Invoke( autoCallback, parameters );
        }
        else
        {
            // Invoke the method from this thread.
            autoCallback.DynamicInvoke( parameters );
        }
    }
}
This code creates a delegate to the callback method and then decides whether to invoke it on the calling thread or the UI thread. If the invocation target is a control-derived object, then it invokes the callback method on the UI thread if required.
For those interested in such details, if you look at the code closely, you might see that it can be simplified if we didn't have to specify the auto callback delegate in the derived class. If we didn't need to specify the signature of the callback delegate, we could handle pretty much everything automatically and the derived service agent class would only need to implement the single line of code in the BeginGetCustomerData method.
Why do we need to specify this delegate? Well, it turns out that if we want to use the Control.Invoke method, we need a delegate. It is unfortunate that the .NET Framework developers didn't provide a version of this method that took a MethodInfo object because that would have made life a lot easier when writing generic code.
An alternative approach is to specify a well-known delegate type and use it for all callback method signatures. For instance, we could mandate that all auto callback methods have a method signature that takes a generic array of objects and passes the Web service parameters back to the client that way. The delegate declaration might look like the following:
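Based on that description, the declaration would be along these lines (the delegate name here is illustrative, not taken from the article):

```csharp
// A single, well-known delegate type used for every auto-callback
// method: the Web service results are passed back to the client
// as a generic array of objects.
public delegate void WebServiceCallback( object[] results );
```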
Using this delegate would mean that we could dramatically simplify the service agent code, but the client code would then have to cast the returned data back to the expected type.
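A client callback written against such a shared delegate type would retrieve and cast its results roughly like this (a sketch; the method and control names are assumptions):

```csharp
// The signature is fixed by the shared delegate type, so every
// auto-callback method receives its results as object[].
private void GetCustomerDataCompleted( object[] results )
{
    // The client must know the expected type and cast to it.
    DataSet customerData = (DataSet)results[0];
    customerGrid.DataSource = customerData.Tables[0];
}
```

The cast is the price of the simpler service agent: a mismatch between what the agent passes and what the client expects only shows up at run time.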
Is It Worth It?
Is it worth all the hassle to implement a service agent class like this one? Well, it depends on how much you want to simplify the lives of the UI developers. Writing a service agent class like this one doesn't necessarily mean less code, but it does represent a better distribution of responsibilities between the UI and service agent developers.
As well as providing a simpler asynchronous call pattern, we can add other interesting functionality to the service agent class. I'll be building on these basic service agent ideas in the future to show you how to add some useful features to the service agent, such as automatic local data caching. Building such functionality into the service agent class means that the UI developer has a lot less to worry about going forward.