Most RESTful APIs are built with the very popular Django Rest Framework. Its general use case is building private or public APIs. The framework is a package that extends existing Django functionality and makes it easier for you to create standalone APIs.
How does Django work?
I am going to assume here that you already know the Django framework. In case you don't: Django is a great web development framework built on Python that can handle both small web apps and large enterprise apps. Some big-name companies use Django for their applications, like Robinhood, Instagram, Zapier and Disqus!
Django was built to use a structure similar to MVC. MVC stands for Model (the database layout), View (the templates) and Controller (the logic). Django actually uses MVT (Model, View, Template) - the view acts as the controller in Django. This structure is used to simplify and organize your code. If you have been in the industry a few years (like me), then I bet you remember the days when everything was in the same file: the database queries, the templates and whatnot. Glad those days are over now.
This MVC standard is great for building dynamic websites. There is an issue with it, though: every change and every update requires a page refresh. It's not reactive to the things you do.
What's the Django Rest Framework?
The Django Rest Framework is a Django package that can be installed in your environment with `pip install djangorestframework` (more on that later) and is used to build RESTful APIs. You could actually build the exact same thing with plain Django, but you would be reinventing the wheel quite a few times. The Django Rest Framework package is a huge timesaver: it sets you up with everything you need to build APIs quickly.
The Django Rest Framework doesn't have templates; it returns JSON text (or only a status code). It has multiple use cases: you can use this library to create endpoints consumed by end users as a private API, endpoints used by a mobile app, or endpoints consumed by your JavaScript frontend. We are going to dive a bit deeper into the first and last use case in this tutorial. Here is an example of what the API could return when you create an endpoint to get a specific user:
```json
{
    "id": 0,
    "email": "user@example.com",
    "first_name": "string",
    "last_name": "string",
    "position": "string",
    "phone": "string",
    "start": "2018-05-31T00:46:29Z",
    "timezone": "string",
    "is_active": true
}
```
That’s all it is, really.
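Since the response is plain JSON, consuming it from any client is trivial. A minimal sketch with Python's standard library, using the sample response above (the email is a placeholder):

```python
import json

# The sample response body as a raw string (placeholder values).
raw = """
{
    "id": 0,
    "email": "user@example.com",
    "first_name": "string",
    "last_name": "string",
    "position": "string",
    "phone": "string",
    "start": "2018-05-31T00:46:29Z",
    "timezone": "string",
    "is_active": true
}
"""

user = json.loads(raw)
print(user["email"], user["is_active"])
```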
How do we install the Django Rest Framework?
Before you can create a Django Rest API, you have to install the package like any other Django plugin. Simply use pip (`pip install djangorestframework`) or clone the project from GitHub (`git clone git@github.com:encode/django-rest-framework.git` - not recommended).
Then add `rest_framework` to the `INSTALLED_APPS` array in your `settings.py`.
Great, Django Rest Framework is now installed. Let's create the Django Rest API.
Django Rest Framework tutorial: a very simple API
For this example API, we will create an API around the object Article. It will have a title field, a text field and a date field.
Let's create the `Article` model first. In your `models.py` file add this:

```python
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=500)
    content = models.TextField()
    date = models.DateTimeField(auto_now_add=True)
```
With this model, a date is automatically added when an article is created. Now we need to create a serializer to make sure the data gets sanitized and published correctly. If you come from a background of using forms in Django, you will probably recognize the same pattern in the serializers.
If not already there, create a `serializers.py` file in your app folder. Here is how our Article serializer will look:

```python
from rest_framework import serializers
from yourappname.models import Article

class ArticleSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = Article
        fields = ('id', 'title', 'content', 'date')
```
Up next, we need to create a view in our `views.py`. This tells Django exactly what to show, with which serializer and which model. Here is how ours will look:

```python
from rest_framework import viewsets
from yourappname.models import Article
from yourappname.serializers import ArticleSerializer

class ArticleViewSet(viewsets.ModelViewSet):
    queryset = Article.objects.all()
    serializer_class = ArticleSerializer
```
As you can see, we created a viewset that inherits from `ModelViewSet`. `ModelViewSet` is a set of views built to handle simple requests to this view, like `get` and `patch` (you can override any of these). We added a `queryset`, which is where the records will be pulled from; in this case we want to display articles, hence the `Article` model. The `serializer_class` tells the view what to accept as arguments for requests and what to show when something is consumed from the API.
At last, we need to add a URL to send requests to. Let's open (or create) `urls.py` and add the following:

```python
from django.conf.urls import url, include
from rest_framework import routers
from yourappname.views import ArticleViewSet

router = routers.DefaultRouter()
router.register(r'articles', ArticleViewSet)

urlpatterns = [
    url(r'^', include(router.urls))
]
```
In the above snippet, we create a `DefaultRouter` instance. This is Django Rest Framework's way of generating URLs for its `ModelViewSet` classes. Next, we register the `ArticleViewSet` we created in `views.py` and give it a URL prefix (in this case this will create `/articles/` and `/articles/<id>/`). Then we add the router URLs to the `urlpatterns`.
Don't forget to run `python manage.py makemigrations && python manage.py migrate` to migrate the model we created and the default Django Rest Framework models.
Start Django again with `python manage.py runserver` and go to the development server (by default http://127.0.0.1:8000/) in your favorite web browser. Voila! We now have a cool API endpoint, shown in Django Rest Framework's browsable GUI.
You can now also `curl` the endpoint in your terminal/shell. It will respond with `[]` as there are no items in it just yet. This is now one of the endpoints you can consume to create and list articles. You can also look up a specific article by appending `<pk>/` to the URL, replacing `<pk>` with the ID of your article.
Django Rest API Settings
You can customize the API quite easily. Simply go to your `settings.py` file and add the following dictionary:

```python
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework.authentication.TokenAuthentication',
        'rest_framework.authentication.SessionAuthentication',
    ),
    'DEFAULT_PAGINATION_CLASS': None,
    'DEFAULT_RENDERER_CLASSES': (
        'rest_framework.renderers.JSONRenderer',
    ),
}
```
In this dictionary you can add various Django Rest Framework settings. Here are a few that are commonly used.
DEFAULT_AUTHENTICATION_CLASSES
The `DEFAULT_AUTHENTICATION_CLASSES` setting controls how users can authenticate themselves. For example, you can add `rest_framework.authentication.TokenAuthentication` to only allow people with a valid token to view or make changes to the API.
A full example of that would be:

```python
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework.authentication.TokenAuthentication',
        'rest_framework.authentication.SessionAuthentication',
    ),
}
```
In the above example, we allow users to use either a token or a session to access our API. It will first check for a token and then check for a session in this case.
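The ordered fallback described above can be sketched in plain Python. This is a toy model for illustration, not DRF's actual internals; the class and key names are assumptions:

```python
# Each "authentication class" returns a (user, credentials) tuple on
# success, or None to fall through to the next class in the chain.

class TokenAuth:
    def authenticate(self, request):
        auth = request.get("Authorization", "")
        if auth.startswith("Token "):
            return ("user-from-token", auth)
        return None  # not a token request; try the next class

class SessionAuth:
    def authenticate(self, request):
        if request.get("sessionid"):
            return ("user-from-session", None)
        return None

def authenticate(request, classes=(TokenAuth, SessionAuth)):
    # Walk the classes in the configured order; first success wins.
    for cls in classes:
        result = cls().authenticate(request)
        if result is not None:
            return result
    return None  # anonymous request

print(authenticate({"Authorization": "Token abc123"}))
print(authenticate({"sessionid": "xyz"}))
```

Because `TokenAuth` comes first in the tuple, a request carrying both a token and a session cookie is authenticated by its token.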
DEFAULT_RENDERER_CLASSES
The `DEFAULT_RENDERER_CLASSES` setting lets you select how the endpoints are presented to users. As you saw before, we can browse the API visually with a nice GUI; you can disable that with this setting. Here is a full example:

```python
REST_FRAMEWORK = {
    'DEFAULT_RENDERER_CLASSES': (
        'rest_framework.renderers.JSONRenderer',
    ),
}
```
In the above example, we disable the GUI and only show JSON responses.
DEFAULT_PERMISSION_CLASSES
With `DEFAULT_PERMISSION_CLASSES` you can assign custom permissions to your API. A good use case is when you have different kinds of users. Let's say you have a publisher and a reader: a publisher should be able to create articles, a reader should not. We can easily deny requests with a permissions file (more on this in the next chapter).

```python
REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': ('yourappname.permissions.AdminPermission',),
}
```
Permissions
Let's create a new file called `permissions.py`. This is where we define our permission classes. A permission could look like this:

```python
from rest_framework import permissions

class AdminPermission(permissions.BasePermission):
    def has_permission(self, request, view):
        if request.user.is_authenticated and request.user.is_publisher:
            return True
        return False
```
The above snippet will deny any request from users that are not authenticated or are not publishers.
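The deny/allow logic can be modeled without Django at all. A toy sketch with a stand-in `User` class (not Django's; `is_publisher` is a custom flag, as in the snippet):

```python
# Stand-in for a request.user object carrying the two flags the
# permission checks.
class User:
    def __init__(self, is_authenticated, is_publisher):
        self.is_authenticated = is_authenticated
        self.is_publisher = is_publisher

def has_permission(user):
    # Mirrors AdminPermission: both conditions must hold.
    return bool(user.is_authenticated and user.is_publisher)

print(has_permission(User(True, True)))    # authenticated publisher
print(has_permission(User(True, False)))   # authenticated reader
print(has_permission(User(False, True)))   # anonymous
```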
Some helpful extra packages you might want to consider
Installing Django Rest Framework pulls in some other dependencies as well. You can extend it a bit more with the following packages; these are all optional.
CoreAPI (`pip install coreapi`)
This package allows you to consume your API as objects instead of plain JSON. If you are not familiar with CoreAPI, it's best to ignore this package for now.
Markdown (`pip install markdown`)
Markdown is known for its ease of styling text documents, and can be used to document your API. If you are not planning on documenting your API, there is no need to install this package.
django-filter (`pip install django-filter`)
django-filter is a library that lets you filter out fields that you don't want to transfer to your frontend. This can save bandwidth by sending only data you are actually going to use. You can also limit the amount of data sent by using serializers, but this package lets you quickly filter models instead of creating a new serializer.
django-crispy-forms (`pip install django-crispy-forms`)
This package is really useful when you are also using django-filter. Crispy Forms helps display filter options as HTML forms when you are using the GUI interface.
drf_yasg (`pip install drf_yasg`)
This is a very simple library that builds API documentation automatically. It is most useful when you are building an API for users/customers.
Source: https://djangowaves.com/resources/django-rest-api/
Unit Testing, Agile Development, Architecture, Team System & .NET - By Roy Osherove
One of the things I wanted to show at my interactive session on unit testing tips and tricks at TechEd this year was how you can "stub" out results from LINQ queries, or mock/stub extension methods in .NET 3.5 (what's the difference between mocks and stubs?).
The only mocking framework in existence that can truly stub out LINQ queries and extension methods is TypeMock.NET. Another option, if you use LINQ to SQL, is to mock out the DataContext and the IOrderedQueryable interface, as outlined here, to return a custom query result set. I'll talk about the first option in this post, since it is much easier to read and understand; I may touch on the second one in a later post.
TypeMock, which I've talked about before, is a very powerful mocking framework, much more so than Rhino Mocks or NMock, because it allows isolating static methods, private methods, constructors, and basically anything you can do in IL. It uses the .NET Profiler APIs to intercept method calls and do whatever it wants with them. In that regard it is almost too powerful, because it rids you of the need to actually design your code for testability; you can just test it as is (how you actually write the tests is another matter for another post).
But I'd rather have the argument whether TypeMock is good or bad, than not have TypeMock on the scene at all, simply because it's just too damn valuable as is (disclosure: TypeMock is owned by a friend of mine).
It's the only real framework that can deal with real untestable legacy code and still allow you to isolate classes enough to test them without changing the code under test itself.
It's the only framework that allows you to write unit tests for code that is based on .NET 3.5 and uses LINQ queries and extension methods, allowing you to isolate these two types of code and thus separate things from their dependencies.
Here are some examples I wanted to show (taken from the examples that come with the TypeMock download):
Example: Stubbing out LINQ Query Results
First, we're going to create one fake result (an array) to return from the LINQ query, and also introduce the real data source to query from (realCustomerList). Note that the real customer list has 3 entries, and the fake list has only two.
```csharp
 1: /// Two Customer instances used as fake values
 2: private static Customer fakeCustomer1 = new Customer { Id = 0, Name = "Fake1", City = "SF" };
 3: private static Customer fakeCustomer2 = new Customer { Id = 1, Name = "Fake2", City = "Redmond" };
 4:
 5: /// A fake list used as a return value in the tests
 6: private List<Customer> fakeCustomers
 7:     = new List<Customer> { fakeCustomer1, fakeCustomer2 };
 8:
 9: /// A list containing 3 customers used as the "real data"
10: private List<Customer> realCustomerList = new List<Customer> {
11:     new Customer{ Id = 1, Name="Dave", City="Sarasota" },
12:     new Customer{ Id = 2, Name="John", City="Tampa" },
13:     new Customer{ Id = 3, Name="Abe", City="Miami" }
14: };
```
Here's how the unit test looks:
```csharp
 1: [TestMethod]
 2: public void MockSimpleQuery()
 3: {
 4:     using (RecordExpectations r = new RecordExpectations())
 5:     {
 6:         var answer = from c in realCustomerList select c;
 7:         r.Return(fakeCustomers);
 8:     }
 9:
10:     var actual = from c in realCustomerList select c;
11:
12:     Assert.AreEqual(2, actual.Count<Customer>());
13:     Assert.IsTrue(actual.Contains(fakeCustomer1));
14:     Assert.IsTrue(actual.Contains(fakeCustomer2));
15: }
```
Line 6 is the one that tells the TypeMock recorder what LINQ query to intercept.
Line 7 tells TypeMock what value to return when this query is intercepted; that means the query will never really take place.
Finally, we assert that what we got from the query is the fake list of customers. Pretty darn easy.
But is this a good test? Not really. This is only a demo of what you could intercept with TypeMock. A real unit test would actually test a piece of code that runs this query, and, instead of providing it with a list of objects to query, would just provide the fake return value it should receive.
Example: Test that an anonymous type is created correctly
```csharp
 1: [TestMethod]
 2: [VerifyMocks]
 3: public void MockAnonymousTypeTest()
 4: {
 5:     using (RecordExpectations rec = new RecordExpectations())
 6:     {
 7:         //Mock the creation of the anonymous type
 8:         var p1 = new { Name = "A", Price = 3 };
 9:         rec.CheckArguments();
10:
11:         //Fake the value of the Name property
12:         rec.ExpectAndReturn(p1.Name, "John");
13:     }
14:     //If creation is done differently, an exception will be thrown.
15:     var target = new { Name = "B", Price = 3 };
16:     //Verify that the fake value is returned
17:     Assert.AreEqual("John", target.Name);
18: }
```
Line 8 tells the TypeMock recorder that this is the anonymous type creation we'd like to intercept.
Line 9 tells it to assert internally that the anonymous type is indeed created with the correct "Name" and "Price" property values, so if I later initialize the anonymous type with the wrong values, I will get a test exception based on the expected values. Pretty neat.
Line 12 is just an example of how you can also intercept and return whatever value you'd like from a property of an anonymous type. In this case the name will always return "John" even though it may have been initialized differently.
Example: Mocking Extension Methods
Extension methods are static methods, which means you can't replace them with a different method instance. There are ways to design your code so that calls to static methods are testable, but assuming you're testing code that cannot be changed, or that is hard to redesign, intercepting the method call itself is almost the only choice.
Assume you've extended the Point class by adding this extension method:
```csharp
 1: public static class Extend
 2: {
 3:     public static Point Multiply(this Point extendedInstance, int scalar)
 4:     {
 5:         extendedInstance.X *= scalar;
 6:         extendedInstance.Y *= scalar;
 7:         return extendedInstance;
 8:     }
 9: }
```
Now you can write a test like this:

```csharp
 1: [TestMethod]
 2: [VerifyMocks]
 3: public void MockExtensionMethod()
 4: {
 5:     using (RecordExpectations rec = new RecordExpectations())
 6:     {
 7:         Point mocked = new Point(7, 9);
 8:         mocked.Multiply(6);
 9:         rec.Return(new Point(5, 5));
10:     }
11:
12:     Point target = new Point(7, 9);
13:     Point actual = target.Multiply(6);
14:
15:     //Verify the returned values
16:     Assert.AreEqual(5, actual.X);
17:     Assert.AreEqual(5, actual.Y);
18: }
```
In lines 7-9 we're actually using the extension method as part of our recording, telling the recorder to return a specific value. We then use that value in our code under test (line 13) and assert that we got the mocked value instead of the real one (the extension method is never executed).
What about other technologies?
Could you use this to test code that relies on other hard-to-test technologies, like Windows Workflow, WCF, etc.? The answer is a big yes. The same principles apply, and most of these frameworks were not built with testability in mind.
It's in those cases, where people ask "how can I test such code where everything is sealed, private and static?", that I tell them that, currently, TypeMock is the only framework that solves this problem with a satisfactory result.
Source: http://weblogs.asp.net/rosherove/archive/2007/11/17/mocking-linq-queries-extension-methods-and-anonymous-types.aspx
Zaqar/specs/api/v1
Marconi API: v1 Blueprint
See also: marconi/specs/grizzly
Decisions
- Consider splitting the API into two namespaces, one for work queuing and one for eventing
- Allows for different scaling architectures, storage backends, etc.
- Simplifies semantics for the two major use cases
- Downside is we remove affordances - clients won't be able to mix/match semantics (we would be effectively removing features)
- Consider defining top-level queues instead of just using tags
- Makes it easier to layer on top of existing message queuing systems / use them as backing stores.
- Aligns with people's mental model of queuing systems
- Makes RBAC more natural
- Could be implemented as syntactic sugar on top of tags - maybe just leave that decision to the storage driver?
- Decide whether message signing should be a mandatory or optional feature (probably optional)
- Decide whether client state should be based on timestamps or markers (or make one optional)
- Markers assume some kind of monotonic message IDs and/or a sequential storage schema.
- Timestamps can be easier to scale, but are prone to collisions, and this complicates client code
- Decide whether Marconi should be responsible for cross-availability zone and even cross-DC replication/availability
- Do we need to support XML?
To Do
- Clarify use cases around tags
- Flesh out examples, incomplete sections (obviously)
- Clean up this document, add a FAQ section
Overview
Marconi provides an HTTP-based API in the spirit of the REST architecture style.
REST is a natural fit for Marconi's design philosophy, shamelessly borrowed from Donald A. Norman's work regarding The Design of Everyday Things:
The value of a well-designed object is when it has such a rich set of affordances that the people who use it can do things with it that the designer never imagined.
This guide assumes the reader is familiar with REST, HTTP/1.1, JSON and XML. URI templates are specified according to RFC 6570.
Common API Elements
HTTPS
All requests to authenticate and operate against the API in production environments use SSL/TLS over HTTP (HTTPS) on TCP port 443. Web servers should only accept high-quality cipher suites, such as AES256_SHA and ECDHE_RSA_AES256_SHA. If hardware acceleration (e.g., AES_NI) is not available, RC4_SHA and ECDHE_RSA_RC4_SHA may be used with some discretion.
Clients
- Clients should follow HTTP redirects
- Clients should advertise gzip support
- Clients must identify themselves in the UserAgent request header; e.g., "User-Agent: python/2.7 cloudthing/1.2"
- Clients must not hard-code URI paths or templates since they may change over time. Instead, clients should cache links and link templates provided by the API.
API Versioning
The Marconi API uses a URI versioning scheme. The first element of the path contains the target version identifier, e.g.:
The URI version will only be incremented to accommodate major new features or API redesigns that cannot be made backwards-compatible. When new API versions are released, older versions are deprecated. Marconi maintainers will work with developers and partners to ensure there is adequate time to migrate to new versions before deprecated ones are discontinued.
Since the base URI is only incremented to accommodate major API revisions, sub-versioning of the API is meaningless and is therefore not used. For example, the next version after "v1" would be "v2", not "v1.1" or some variant thereof. (HATEOAS and media types are used in lieu of minor, URI-based versioning.)
Resource Media Types
The API supports `application/json` and `application/xml`, encoded as UTF-8. Gzip'd requests are optionally supported by the server.
Unrecognised protocol elements should be ignored. This includes XML elements and attributes, JSON object properties, link relation types, media types, etc.
TODO: Define and register the message and messages media types.
Authentication
All requests to the API may only be performed by an authenticated agent.
To authenticate, an agent issues an authentication request to a Keystone Identity Service endpoint.
In response to valid credentials, Keystone responds with an auth token and a service catalog that contains a list of all services and endpoints available for the given token. Multiple endpoints may be returned for Marconi according to physical locations and performance/availability characteristics of different deployments.
Marconi clients must specify a valid auth token in the `X-Auth-Token` header for each request to the Marconi API. The API validates auth tokens against Keystone before servicing each request.
Authorization
TBD: The API needs to verify read/write access to all endpoints according to the provided auth token.
Request Signing
Messages may optionally contain a signature to verify their authenticity.
TODO: How to register and query public keys? Via Keystone? Shouldn't be part of Marconi, in any case.
How to calculate signature?
SecP256K1 or something using RSA-2048? (encrypt a SHA-512 value). Would be nice to allow different schemes... How is Ceilometer doing it?
See also:
Tenant ID
Auth tokens are only valid for a particular tenant ID, which should be reflected in the service endpoints retrieved from Keystone. Agents use the given endpoint as a home document href from which to discover links to all other API requests.
An example endpoint:
The client chooses one of the presented endpoints and uses it as the base URL for all subsequent requests.
Caching
@todo
Can be used to save bandwidth at the expense of server CPU and/or added complexity. Provide an ETag for results, which the client submits in subsequent requests using the If-None-Match header. The server performs the query against the data store, dynamically generates an ETag, compares it to the client's ETag, and returns 204 if and only if the two ETags match. This is especially helpful in cutting down on duplicate traffic due to the inability of the server to guarantee message ordering.
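The conditional-GET flow described above can be sketched as follows. This is a toy model; the md5-based ETag scheme and the exact response shape are assumptions for illustration, not part of the spec:

```python
import hashlib

def make_etag(result_bytes):
    # Derive an ETag from the serialized query result (assumed scheme).
    return '"%s"' % hashlib.md5(result_bytes).hexdigest()

def conditional_get(result_bytes, if_none_match=None):
    """Return (status, body, etag) for a query result and an optional
    client-supplied If-None-Match value."""
    etag = make_etag(result_bytes)
    if if_none_match == etag:
        # Nothing new since the client's last request: empty response.
        return 204, None, etag
    return 200, result_bytes, etag

# First request: full body plus an ETag the client caches.
status, body, etag = conditional_get(b'[{"id": "abc"}]')
# Repeat request with If-None-Match: the server skips the body.
status2, body2, _ = conditional_get(b'[{"id": "abc"}]', if_none_match=etag)
```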
Errors
If any request results in an error, the server will return an appropriate 4xx or 5xx HTTP status code, as well as the following information in the body:
- Title
- Description
- Internal code
- Link to more information
Example:
HTTP/1.1 400 Bad Request
Content-Type: application/json

{
    "title": "Unsupported limit",
    "description": "The given limit cannot be negative, and cannot be greater than 50.",
    "code": 1092,
    "link": {
        "rel": "api-doc",
        "href": "",
        "text": "API documentation for the limit parameter"
    }
}
Common Headers
Each request to the API must include certain standard and extended HTTP headers. These headers provide host, agent, authentication and other pertinent information to the server.
Sample API Request
GET /v1/480924/messages?tags=channel-foo,topic-bar&ts=1355237242783&limit=10 HTTP/1.1
Host: marconi.example.com
User-Agent: python/2.7 killer-rabbit/1.2 uuid/30387f00-39a0-11e2-be4d-a8d15f34bae2
Date: Wed, 28 Nov 2012 21:14:19 GMT
Accept: application/json
Accept-Encoding: gzip
X-Auth-Token: 7d2f63fd-4dcc-4752-8e9b-1d08f989cc00
Get Home Document
Request Template
GET {base_url}
Request
GET /v1/480924 HTTP/1.1
Host: marconi.example.com
[...]
Response
@todo
Discussion
The entire Marconi API is discoverable from a single starting point; agents do not need to know any more than this one URI in order to explore the entire API.
See also:
Get a Specific Message
Request Template
GET {base_url}/messages/{messageId}
Request
GET /v1/480924/messages/50b68a50d6f5b8c8a7c62b01 HTTP/1.1
Host: marconi.example.com
[...]
Response
HTTP/1.1 200 OK
[...]

{
    "id": "50b68a50d6f5b8c8a7c62b01",
    ...
}
Discussion
@todo Define message fields.
Get Messages
Request Template
GET {base_url}/messages{?tags,ts,limit,sort,audit,echo}
Request
GET /v1/480924/messages?tags=foo,bar,bang&ts=1355237242783&limit=10&sort=desc HTTP/1.1
Host: marconi.example.com
[...]
Response
HTTP 200 if the query matched 1 or more messages, HTTP 204 otherwise (with no body).
HTTP/1.1 200 OK
ETag: "2:fa0d1a60ef6616bb28038515c8ea4cb2"
[...]

{
    "messageCount": 1,
    "links": [
        {
            "rel": "self",
            "href-template": "{base_url}/messages?tags=foo,bar,bang&ts=1355237242783&limit=10&sort=desc"
        },
        {
            "rel": "next",
            "href-template": "{base_url}/messages?tags=foo,bar,bang&ts=1355237244190&limit=10&sort=desc"
        }
    ],
    "messages": [
        {
            "id": "50b68a50d6f5b8c8a7c62b01",
            "href-template": "{base_url}/messages/{id}",
            "userAgent": "python/2.7 killer-rabbit/1.2 uuid/79ed56f8-2519-11e2-b835-acf6018e45a1",
            "ts": 1355237244190,
            ...
        }
    ]
}
Example audit messages response (if a client were to add audit=true to the sample query).
HTTP/1.1 200 OK
ETag: "2:fa0d1a60ef6616bb28038515c8ea4cb2"
[...]

{
    "messageCount": 2,
    "links": [
        {
            "rel": "self",
            "href-template": "{base_url}/messages?tags=foo,bar,bang&ts=1355237242783&limit=10&sort=desc&audit=true"
        },
        {
            "rel": "next",
            "href-template": "{base_url}/messages?tags=foo,bar,bang&ts=1355237244210&limit=10&sort=desc&audit=true"
        }
    ],
    "messages": [
        {
            "id": "10b00a50d6f5b8c8a7c62ccc",
            "href-template": "{base_url}/messages/{id}",
            "userAgent": "marconi/1 uuid/f2e4b36a-3f05-11e2-b71d-7823d2b0f3ce",
            "ts": 1355237242783,
            "age": 790,
            "tags": ["foo", "snap", "bar", "bang"],
            "body": {
                "event": "MessageCreated",
                "timestamp": "2012-12-04 16:53:20Z",
                "details": {
                    "messageId": "..."
                }
            }
        },
        {
            "id": "10b00a50d6f5b8c8a7c62ccb",
            "href-template": "{base_url}/messages/{id}",
            "userAgent": "marconi/1 uuid/f2e4b36a-3f05-11e2-b71d-7823d2b0f3ce",
            "ts": 1355237244210,
            "age": 792,
            "tags": ["foo", "snap", "bar", "bang"],
            "body": {
                "event": "LockExpired",
                "timestamp": "2012-12-04 16:53:18Z",
                "details": {
                    "messageId": "50b68a50d6f5b8c8a7c62a87",
                    "userAgent": "python/2.7 killer-rabbit/1.2 uuid/79ed56f8-2519-11e2-b835-acf6018e45a1"
                }
            }
        }
    ]
}
Discussion
Message IDs are opaque strings; clients should make no assumptions on their format or length, except that IDs may not exceed 50 characters in length.
tags is a list of up to n message tags, where n >= 2. The maximum number of tags supported is configurable. The API will return only those messages containing ALL of the specified tags. If no tags are specified, all messages are returned.
limit specifies up to x messages to return. Note that x is configurable, but must be at least 10. If not specified, limit defaults to x. When more messages are available than can be returned in a single request, the client can pick up the next batch by simply using the URI template parameters returned from the previous call in the "next" field (TBD).
ts is the timestamp (64-bit UNIX timestamp in milliseconds) from the last message the client saw (relative to the server - use the "ts" message field). The API will return messages that were enqueued after the specified time, minus t milliseconds, where t is an implementation-defined number of milliseconds within which the server cannot guarantee message ordering (admittedly, a leaky abstraction). The client must cache messages for t milliseconds in order to check for duplicates returned in subsequent requests. Note that the message having the exact given timestamp (ts) will probably be part of the result set unless that particular message has expired or an ETag submitted by the client allows the server to definitively determine that no new messages match the given query string since the last request.
Note: if ts is not specified, the API will return all messages.
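The client-side duplicate handling implied by the ts semantics above can be sketched as a small cache of recently seen message IDs. The window length `T_MS` and the field names are assumptions for illustration:

```python
T_MS = 2000  # the implementation-defined ordering window t, in ms (assumed)

def dedupe(batch, seen):
    """Yield messages not seen before and record their ids.
    `seen` maps message id -> ts of that message."""
    for msg in batch:
        if msg["id"] in seen:
            continue  # duplicate from an overlapping ts window
        seen[msg["id"]] = msg["ts"]
        yield msg

def prune(seen, now_ms):
    """Forget ids older than the ordering window; the server can no
    longer re-deliver them, so there is no need to remember them."""
    for mid in [m for m, ts in seen.items() if ts < now_ms - T_MS]:
        del seen[mid]

seen = {}
first = list(dedupe([{"id": "a", "ts": 1000}, {"id": "b", "ts": 1500}], seen))
# The next poll overlaps: "b" comes back again and is dropped.
second = list(dedupe([{"id": "b", "ts": 1500}, {"id": "c", "ts": 1800}], seen))
```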
sort specifies how to chronologically order results. Use "asc" or "desc" for ascending or descending, respectively. If sort is not given, the API will return results in ascending order (oldest message first).
audit is a boolean value (i.e., "true" or "false") that determines whether the API will return actual messages or audit messages. Audit messages are "messages about messages", and are automatically generated by the server. Clients may query for audit messages to audit business processes or to diagnose data flow issues. The default value for audit is "false".
echo is a boolean (i.e., "true" or "false") that determines whether or not the API will return a client's own messages, as determined by the uuid portion of the User-Agent header. If not specified, echo defaults to "false".
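The echo filter above hinges on the uuid portion of the User-Agent header. A sketch of how a server might apply it; the parsing is an assumption based on the sample header format ("python/2.7 cloudthing/1.2 uuid/<uuid>"):

```python
def extract_uuid(user_agent):
    # Pull the value of the "uuid/<value>" token out of a User-Agent.
    for part in user_agent.split():
        if part.startswith("uuid/"):
            return part[len("uuid/"):]
    return None

def apply_echo(messages, client_user_agent, echo=False):
    """With echo=False, hide messages whose recorded User-Agent carries
    the same uuid as the requesting client."""
    client_uuid = extract_uuid(client_user_agent)
    if echo or client_uuid is None:
        return list(messages)
    return [m for m in messages
            if extract_uuid(m.get("userAgent", "")) != client_uuid]

mine = {"userAgent": "python/2.7 app/1.0 uuid/1234"}
theirs = {"userAgent": "python/2.7 app/1.0 uuid/9999"}
visible = apply_echo([mine, theirs], "python/2.7 app/1.0 uuid/1234")
```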
Post Messages
Request Template
POST {base_url}/messages
Request
POST /v1/480924/messages HTTP/1.1
Host: marconi.example.com
[...]

[
    {
        "ttl": 10,
        "durability": 3,
        "tags": ["420D29D6-3F24-11E2-BC14-7823D2B0F3CE", "checkpoint"],
        "body": {
            "event": "BackupStarted",
            "backupId": "c378813c-3f0b-11e2-ad92-7823d2b0f3ce"
        }
    },
    {
        "tags": ["420D29D6-3F24-11E2-BC14-7823D2B0F3CE", "progress"],
        "body": {
            "event": "BackupProgress",
            "currentBytes": "0",
            "totalBytes": "99614720"
        }
    }
]
Responses
When a single message is submitted:
HTTP/1.1 201 Created Location:
When multiple messages are submitted:
HTTP/1.1 201 Created Location:
Discussion
One or more messages may be submitted in a single request, but must always be encapsulated in a collection container (an array in JSON, a parent tag in XML). In the case of a batch POST, querying the returned Location may return messages posted concurrently by other agents.
ttl is the number of seconds the server will keep a message before automatically deleting it. Should be long enough to give all observers ample time to retrieve the message. If not specified, defaults to the maximum default set for any of the message's associated tags, or the default set for the tenant (TBD), or the one configured in the deployment (?).
tags is a list of up to n tags to associate with a given message, where n >= 2. The maximum number of tags supported is configurable (the default is 5). The maximum length of each tag is likewise configurable (with a default of 150 characters). Tags are case-sensitive.
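The tag constraints above can be validated with a small helper. The limits used here are the stated defaults (5 tags, 150 characters each); both are configurable per deployment:

```python
MAX_TAGS = 5       # configurable; default per the spec
MAX_TAG_LEN = 150  # configurable; default per the spec

def validate_tags(tags):
    """Reject tag lists that exceed the configured limits.
    Tags are case-sensitive, so no normalization is applied."""
    if len(tags) > MAX_TAGS:
        raise ValueError("too many tags (max %d)" % MAX_TAGS)
    for tag in tags:
        if len(tag) > MAX_TAG_LEN:
            raise ValueError("tag too long (max %d chars)" % MAX_TAG_LEN)
    return tags

validate_tags(["420D29D6-3F24-11E2-BC14-7823D2B0F3CE", "checkpoint"])
```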
signature - TBD, probably something like sign(hash(salt + payload))
body specifies a custom document which constitutes the body of the message being sent. The size of this body, in characters and including whitespace, is configurable (the default is 64 KiB).
durability requests a certain durability guarantee from the server. The purpose of this parameter is to allow clients to dynamically make tradeoffs between durability and cost/performance depending on the type of message being sent. Note that the maximum durability level supported by the server is configurable; not all deployments will support all levels. If a level is unsupported, the server will return 400 Bad Request. If a level is supported but the server is unable to complete the request, an appropriate 5xx error will be returned to the client.
The following levels are defined:
Note that higher durability levels assume the guarantees (if any) of all lower levels.
Delete a Single Message
Request Template
DELETE {base_url}/messages/{message_id}{?lock_id}
Request
DELETE /v1/480924/messages/50b68a50d6f5b8c8a7c62b01 HTTP/1.1 Host: marconi.example.com [...]
Response
HTTP/1.1 204 No Content
Discussion
message_id specifies the message to delete.
lock_id specifies that the message should only be deleted if it has the specified transactional lock, and that lock has not expired. This is useful for ensuring only one agent processes any given message; whenever a worker client's lock expires before it has a chance to delete a message it has processed, the worker must roll back any actions it took based on that message, since another worker will likely process the same message as part of a different transaction.
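As a sketch of how a client might form this request, the helper below builds the DELETE URL with an optional lock_id query parameter. The function name and structure are illustrative, not part of the API; only the URL shape follows the request template above.

```python
def delete_message_url(base_url, message_id, lock_id=None):
    """Build the DELETE URL for a single message.

    If lock_id is given, the server deletes the message only while that
    transactional lock is still held and unexpired.
    """
    url = f"{base_url}/messages/{message_id}"
    if lock_id is not None:
        url += f"?lock_id={lock_id}"
    return url

# Mirrors the example request above (values taken from the spec's examples)
url = delete_message_url("/v1/480924", "50b68a50d6f5b8c8a7c62b01",
                         lock_id="9206ebcc0939d18c351e994555de97b9")
```

A worker that fails to delete before its lock expires must roll back, exactly as described above; the lock_id parameter is what lets the server reject such a stale delete.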
Delete Several Messages
Request Template
DELETE {base_url}/messages{?tags,all,lock_id}
Request
DELETE /v1/480924/messages?tags=420D29D6-3F24-11E2-BC14-7823D2B0F3CE HTTP/1.1 Host: marconi.example.com [...]
Response
HTTP/1.1 200 OK [...] { "messageCount": 43 }
Discussion
tags specifies that only messages having the given tags will be deleted. To avoid accidentally deleting all messages for a given tenant, if
tags is not specified,
all must equal "true"; otherwise, no messages will be deleted and the server will return 400 Bad Request. Tags are case-sensitive.
all is a boolean value (i.e., "true" or "false") that determines whether all messages will be deleted in the case that no tags are specified. If tags are specified, however, the server simply ignores
all.
lock_id specifies that messages should only be deleted if they have the specified transactional lock, and that lock has not expired.
The server returns the number of messages deleted.
Lock Messages
Request Template
POST {base_url}/messages/locks{?tags,limit}
Request
POST /v1/480924/messages/locks?tags=686897696a7c876b7e&limit=5 [...] { "ttl": 30 }
Response
The client receives a lock ID and a list of locked messages, if any:
@todo { "lock": { "id": "9206ebcc0939d18c351e994555de97b9", "ttl": 30, "age": 0 }, "messages": [ ... ] }
Discussion
Locks a set of messages, up to limit, from oldest to newest, skipping any that are already locked.
Lock TTL should be <= message TTLs. If it is not, the messages will expire and be garbage collected before the lock expires, which is not what clients will expect. (Should this constraint be enforced server-side?)
limit specifies up to x messages to lock. Note that x is configurable, but must be at least 10. If not specified, limit defaults to x.
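The selection rule ("oldest to newest, skipping any that are already locked, up to limit") can be sketched in Python. The message dictionaries here are a hypothetical shape, not the wire format; only the ordering logic follows the spec.

```python
def select_messages_to_lock(messages, limit):
    """Return ids of up to `limit` unlocked messages, oldest first.

    `messages` is a list of dicts with 'id', 'age' (seconds since posted),
    and an optional 'locked' flag -- an illustrative shape for this sketch.
    """
    picked = []
    # Largest age = oldest message, so sort descending by age.
    for msg in sorted(messages, key=lambda m: m["age"], reverse=True):
        if msg.get("locked"):
            continue  # already claimed by another worker
        picked.append(msg["id"])
        if len(picked) == limit:
            break
    return picked
```

A server-side implementation would of course do this atomically so that two workers cannot claim the same message.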
Check the Status of a Lock
Request Template
GET {base_url}/messages/locks/{lock_id}
GET /v1/480924/messages/locks/9206ebcc0939d18c351e994555de97b9 [...]
Response
{ "id": "9206ebcc0939d18c351e994555de97b9", "ttl": 30, "age": 17 }
Discussion
This query may be used to check the age of a given lock (relative to the server's clock).
Renew a Lock
Request Template
PATCH {base_url}/messages/locks/{lock_id}
Request
PATCH /v1/480924/messages/locks/9206ebcc0939d18c351e994555de97b9 [...] [ { "replace": "/age", "value": 0 } ]
Response
HTTP/1.1 204 No Content
Discussion
Clients should periodically renew locks during long-running batches of work to avoid losing a lock in the middle of processing a message. The server only accepts "0" for the age value.
Count Messages
Request Template
HEAD /{version}/{tenant}/messages{?tags,ts,audit,echo}
Request
HEAD /v1/480924/messages?tags=foo,bar,bang&ts=1355237242783 HTTP/1.1 Host: marconi.example.com [...]
Response

HTTP/1.1 200 OK
{ "messageCount": 46 }
Discussion
See Get Messages for definitions of the query parameters.
Check Health
Request Template
GET {base_url}/health
or
HEAD {base_url}/health
Request
GET /v1/480924/health Host: marconi.example.com [...]
Response
HTTP/1.1 200 OK [...] { "code": "green", "link": { "rel": "status", "href": "", "text": "Service status page" } }
Discussion
Use this request to check on the Marconi service status as a whole. The following status values are defined:
Note that if a client performs HEAD instead of GET, the server will return one of the following:
Poll Stats
@todo - also how authenticate?
Set Message Defaults
Request Template
PUT {base_url}/config { "message": { "ttl": {ttl}, "durability": {durability}, "lock": { "ttl": {lock_ttl} } } }
Request
PUT /v1/480924/config HTTP/1.1 Host: marconi.example.com [...] { "message": { "ttl": 120, "durability": 1, "lock": { "ttl": 60 } } }
Response
HTTP/1.1 204 No Content
Discussion
Set per-tenant default message options, such as ttl and durability. Lock TTL must be <= message TTL.
Get Message Defaults
Request Template
GET {base_url}/config
Request
GET /v1/480924/config HTTP/1.1 Host: marconi.example.com [...]
Response
HTTP/1.1 200 OK [...] { "message": { "ttl": 120, "durability": 1, "lock": { "ttl": 60 } } }
Discussion
Get per-tenant default message options, such as ttl and durability. If not set explicitly by a client, a default configuration is used, and can vary between API deployments.
TBD
Request Template
GET /{version}/{tenant}
Response
@todo
Discussion
@todo
June 1, 2020 Single Round Match 729 Editorials
BrokenChessboard
Used as: Division Two – Level One:
If we determine the color of the upper-left corner of the grid others will be uniquely determined. So we have only two different valid colorings.
For a fixed coloring, let’s count how many cells must be changed such that we reach the state. It’s simple using comparing the cells one by one.
Code by dhruvsomani:
def minimumFixes(self, tup):
    a = 0
    b = 0
    for i in range(len(tup)):
        for j in range(len(tup[0])):
            if (((i + j) % 2) and tup[i][j] == 'W') or (((i + j + 1) % 2) and tup[i][j] == 'B'):
                a += 1
            else:
                b += 1
    return min(a, b)
SoManyRectangles
Used as: Division Two – Level Two:
The observation is to think about how many regions are added when we add a new rectangle to the plane. The first rectangle adds only one new region. The second rectangle can increase the number of regions to four. In general, the i-th rectangle adds at most i new regions.
So in total, we have at most n * (n + 1) / 2 regions.
So we can iterate all of the regions and count how many rectangles cover this area. This leads to O(n^3) time complexity.
Now to find all of the possible regions, iterate over the different x’s and different y’s. There are at most 2n different x’s in the input and 2n different y’s in the input.
For example if the input is {0, 0, 10, 20, 30}, {0, 0, 0, 0, 0}, {100, 10, 20, 30, 40}, {1, 1, 1, 1, 1}, the x’s we iterate are {0, 0, 10, 20, 30, 100, 10, 20, 30, 40} and y’s we iterate are {0, 0, 0, 0, 0, 1, 1, 1, 1, 1}. For example for the point (10, 0) rectangles {1, 3} cover it.
Another strategy is sweep line. Sweeping from left to right, adding rectangles when sweep line reaches x1[i] and removing it when it reaches x2[i]. This way at each moment we have track of segments that present rectangles create on y-axes.
#define FOR(i,a,b) for (int i=a;i<=b;++i)

int X[11111], Y[11111], n, XS, YS, ans;

class SoManyRectangles {
public:
    int maxOverlap(vector<int> x1, vector<int> y1, vector<int> x2, vector<int> y2) {
        n = x1.size();
        XS = YS = 0;
        FOR(i,0,n-1) {
            X[++XS] = x1[i]; X[++XS] = x2[i] - 1;
            Y[++YS] = y1[i]; Y[++YS] = y2[i] - 1;
        }
        ans = 0;
        FOR(i,1,XS) FOR(j,1,YS) {
            int x = X[i], y = Y[j];
            int s = 0;
            FOR(k,0,n-1)
                if (x >= x1[k] && x + 1 <= x2[k] && y >= y1[k] && y + 1 <= y2[k]) ++s;
            ans = max(ans, s);
        }
        return ans;
    }
};
RareItems
Used as: Division Two – Level Three:
Consider we are in the middle of choosing. What is important for us? For sure, nothing is important but to keep track of the collection purchased. So let’s define dp[mask] where the mask is the collection seen so far. dp[mask] = expected number of purchases we need to perform to reach complete collection while we have seen the items present in the mask so far.
We have dp[2^n - 1] = 0, because in this case we are finished. The answer is dp[0].
Let’s think of transition. If we have seen items present in mask, there are two possibilities for the next purchase, name it item i:
- It’s present in the mask before.
So we reach dp[mask] again and our state doesn’t change.
- It isn’t present in the mask before.
So we reach dp[mask | 2^i].
Consider p_i = frequency[i] / (sum of all frequencies), the probability that the next purchase is item i. We have

dp[mask] = (1 + Σ_{i ∉ mask} p_i · dp[mask | 2^i]) / (1 − Σ_{i ∈ mask} p_i)

which follows from dp[mask] = 1 + Σ_{i ∈ mask} p_i · dp[mask] + Σ_{i ∉ mask} p_i · dp[mask | 2^i] by moving the dp[mask] terms to the left-hand side.
Overall time complexity is O(2^n * n).
long double dp[1100000];
long double a[22];

double expectedPurchases(vector<int> frequency) {
    int n = frequency.size();
    long double sum = 0;
    for (auto it : frequency) sum += it;
    for (int i = 0; i < (int)frequency.size(); i++)
        a[i] = (long double)frequency[i] / sum;
    memset(dp, 0, sizeof(dp));
    int m = (1 << n) - 1;
    dp[m] = 0;
    for (int sta = m - 1; sta >= 0; sta--) {
        long double s1 = 0, s2 = 0;
        for (int j = 0; j < n; j++) {
            if (!(sta & (1 << j)))
                s1 += a[j] * dp[sta | (1 << j)];
            else
                s2 += a[j];
        }
        s1 += 1;
        dp[sta] = s1 / (1 - s2);
    }
    return dp[0];
}
MagicNumberThree
Used as: Division One – Level One:
We should calculate the number of subsequences of s that their sum is divisible by 3.
What is the maximum sum of a subsequence of s? 50 * 9 = 450.
So the problem could be converted to the knapsack problem. We have several items, we need to count how many subsets of them have the sum equal to k.
Let’s define dp[i][k] = in how many ways we can get sum equal to k if we can only use the first i digits.
dp[0][0] = 1, we can get sum equal to zero without using any of digits.
For i > 0, dp[i][j] = dp[i – 1][j] + dp[i – 1][j – s[i]]. Using 1-based indexing for s. If j – s[i] < 0, ignore it.
Finally, sum up all values of dp[n][k] which k is divisible by 3.
Further step: one can use dp[i][3], just store the sum modulo 3.
Code by 1007:
import copy

MOD = 10**9 + 7  # the usual Topcoder modulus; not shown in the original snippet

def countSubsequences(self, s):
    n = len(s)
    dp = [1, 0, 0]
    for i in range(n):
        x = int(s[i])
        dpn = copy.deepcopy(dp)
        for j in range(3):
            k = (j + x) % 3
            dpn[k] += dp[j]
        dp = dpn
    return (dp[0] - 1) % MOD
FrogSquare
Used as: Division One – Level Two:
Consider, in the middle of the path, a jump from the cell v to u to t. If u is not on the border of the table, we can swap it with a cell on the border without losing the connection between v and u, or between u and t.
The result is simple. We always use border cells as intermediate cells. So we have a graph with roughly 4n vertices and 16n^2 edges, which is small enough to run BFS on.
Let’s review again. We use BFS with the cell (sx, sy) as the starting cell. When we want to discover neighbors of the current cell (which is present at the top of the queue), say v, we are only interested in cells which jump is possible from v to them and also they are on the border.
Finally, for each cell on the border, we check if it’s possible to reach (tx, ty) from this cell and update the answer.
Code by nuip:
// Template helpers reconstructed so the snippet is self-contained
typedef pair<int,int> pii;
const int MOD = 1e9; // used as "infinity" for distances here
#define rep(i,n) for (int i = 0; i < (n); ++i)
// MN(a, b): set a = min(a, b); returns true if a changed
template<class T> bool MN(T &a, T b) { return b < a ? (a = b, true) : false; }

int dst[1123][1123];

class FrogSquare {
public:
    int minimalJumps(int n, int d, int sx, int sy, int tx, int ty) {
        if (sx == tx && sy == ty) return 0;
        auto ok = [&](int x, int y) { return x*x + y*y >= d*d; };
        if (ok(sx - tx, sy - ty)) return 1;
        fill(dst[0], dst[1123], MOD);
        dst[sy][sx] = 0;
        queue<pii> que;
        que.emplace(sx, sy);
        auto moveTo = [&](int x, int y, int X, int Y) {
            if (ok(x - X, y - Y)) {
                if (MN(dst[Y][X], dst[y][x] + 1)) que.emplace(X, Y);
            }
        };
        while (que.size()) {
            int x, y;
            tie(x, y) = que.front(); que.pop();
            if (ok(x - tx, y - ty)) return dst[y][x] + 1;
            rep(i, n) {
                moveTo(x, y, 0, i);
                moveTo(x, y, i, 0);
                moveTo(x, y, n - 1, i);
                moveTo(x, y, i, n - 1);
            }
        }
        return -1;
    }
};
XorAndLIS
Used as: Division One – Level Three:
Let’s reform the operation and make another operation. We want to prove that for every i < j, it’s possible to set x[j] = x[i] ^ x[j].
For example to make x[2] ^= x[0], we do x[2] ^= x[1], x[1] ^= x[0], x[2] ^= x[1], x[1] ^= x[0].
We can prove it by induction. Consider we can do x[j] ^= x[i] for j – i = k – 1, we want to prove that it’s possible to do x[j] ^= x[i] for j – i = k.
First, using assumption, do x[j] ^= x[i + 1]. Then x[i + 1] ^= x[i]. Again using assumption, x[j] ^= x[i + 1] and finally x[i + 1] ^= x[i] to undo changes on i + 1. This way we proved that it’s possible for every i < j to make x[j] ^= x[i] without changing any other values.
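The four-step sequence is easy to verify by direct simulation. This quick Python sanity check (not part of the solution) confirms that the editorial's sequence nets out to x[2] ^= x[0] while restoring x[1]:

```python
def apply_ops(x, ops):
    """Apply the allowed operation x[j] ^= x[j-1] for each index j in ops."""
    x = list(x)
    for j in ops:
        x[j] ^= x[j - 1]
    return x

start = [5, 9, 12]
# The editorial's sequence for x[2] ^= x[0]:
# x[2] ^= x[1], x[1] ^= x[0], x[2] ^= x[1], x[1] ^= x[0]
end = apply_ops(start, [2, 1, 2, 1])
```

Here end[0] and end[1] match the starting values, and end[2] equals start[2] ^ start[0], as claimed.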
Using the new operation we can make x[i] ^= S which S = xor sum of a subset of indexes less than i.
We use the typical approach for solving LIS problems. Consider all of the increasing subsequences with length j we can make using the first i elements (maybe by applying some changes), we store the minimum value of the last element of these subsequences in dp[i][j].
dp[i][j] = min(dp[i – 1][j], X) where X is the minimum possible value of x[i] ^ S > dp[i – 1][j – 1] (using 1-based indexing) where S = xor sum of a subset of indexes less than i.
To find the minimum possible value of x[i] ^ S > dp[i – 1][j – 1], we use Gaussian elimination.
Time complexity is O(N^2B) where B = number of bits = 60.
Code by Um_nik:
typedef long long ll;
const ll INF = (ll)3e18;
const int N = 105;
const int K = 60;

int n;
ll dp[N][N];
ll a[K];

ll getBiggest(ll x, int k) {
    for (; k >= 0; k--) {
        if (a[k] == -1) continue;
        if (((x >> k) & 1) == 0) x ^= a[k];
    }
    return x;
}

ll solve(ll x, ll bound) {
    bool gr = false;
    for (int k = K - 1; k >= 0; k--) {
        if (gr) {
            if (a[k] == -1) continue;
            if ((x >> k) & 1) x ^= a[k];
            continue;
        }
        if (a[k] == -1) {
            if (((x >> k) & 1) == ((bound >> k) & 1)) continue;
            if ((x >> k) & 1) { gr = true; continue; }
            return INF;
        }
        if ((bound >> k) & 1) {
            if (((x >> k) & 1) == 0) x ^= a[k];
            continue;
        }
        if ((x >> k) & 1) x ^= a[k];
        if (getBiggest(x, k - 1) < bound) {
            x ^= a[k];
            gr = true;
        }
    }
    if (x < bound) throw;
    return x;
}

void addToGauss(ll x) {
    for (int k = K - 1; k >= 0; k--) {
        if (((x >> k) & 1) == 0) continue;
        if (a[k] != -1) { x ^= a[k]; continue; }
        a[k] = x;
        return;
    }
}

class XorAndLIS {
public:
    int maximalLength(vector<ll> val) {
        n = (int)val.size();
        for (int i = 0; i <= n; i++)
            for (int j = 0; j <= n; j++)
                dp[i][j] = INF;
        for (int i = 0; i < K; i++) a[i] = -1;
        dp[0][0] = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j <= i; j++) {
                if (dp[i][j] == INF) break;
                dp[i + 1][j] = min(dp[i + 1][j], dp[i][j]);
                ll x = solve(val[i], dp[i][j]);
                dp[i + 1][j + 1] = min(dp[i + 1][j + 1], x + 1);
            }
            addToGauss(val[i]);
        }
        int ans = n;
        while (dp[n][ans] == INF) ans--;
        return ans;
    }
};
a.poorakhavan
Guest Blogger
hi,
I'm busy writing a C program that will allow a shopkeeper to enter item prices, check if items are VAT or not, calculate VAT, check if customers are members, and print out receipts etc. But with all my else statements there is this error where it says "syntax error C2143: missing ; before {" but I can't put in a ; because it will make the statement empty! Please help!!!
here is the code:
/* this program will allow user to enter price, check if its a VAT item or not, calculate tax if VAT,
calculate final bill, check if customer is a member, and if so calculate discount, and calculate any
change due.*/
#include <stdio.h>
#include <conio.h>
int main()
{ /* declaring and initializing variables*/
double vat = 0.10;
double mem_discount = 0.05;
double item_price;
double tax = 0;
double total_tax = 0;
double subtotal = 0;
double final_bill = 0;
double mem_final_bill = 0;
double money_paid;
double change_due;
double discount = 0;
int member;
int non_member;
char vat_item;
char nonvat_item;
char y;
char n;
printf("welcome to De Corner Variety Shop\n");
printf("enter item price, type -1 to end\n");
scanf("%lf", &item_price);
while (item_price != -1) /*while loop to allow user to enter as many items as they want and to enter a sentinel value when finished*/
{
subtotal += item_price; /*calculates running subtotal of all goods*/
printf("enter y if item is a VAT item and n if item is a non vat item\n");
scanf("%c", &vat_item);
scanf("%c", &nonvat_item);
if (vat_item == y)
{
tax = item_price * vat; /* calculates the vat owed on item*/
total_tax += tax; /* calculates the total tax of all goods*/
} // end if statement
} // end while loop
final_bill = subtotal + total_tax;
printf(" the final bill before discount (if applicable) is %.2lf\n", final_bill);
printf("enter 1 if customer is a member, 2 if a non member\n");
scanf("%d", &member);
scanf("%d", &non_member);
if (member == 1)
{
discount = final_bill * mem_discount;
mem_final_bill = final_bill - discount;
printf("the total is $ %.2lf", mem_final_bill);
printf("how much did the member pay?\n");
scanf("%lf", &money_paid);
if (money_paid > mem_final_bill)
{
change_due = money_paid - mem_final_bill;
printf("$ %.2lf is owed to the customer\n", change_due);
printf(" thank you and have a great day!\n");
}
}// end if statement
else (money_paid == mem_final_bill)
{
printf("no change is owed to the customer\n");
printf("thank you and have a great day!\n");
}
else (non_member == 2)
{
printf("the total is $ %.2lf\n", final_bill);
printf("how much did the customer pay?\n");
if (money_paid > final_bill)
{
change_due = money_paid - final_bill;
printf("$ %.2lf is owed to the customer\n", change_due);
printf(" thank you and have a great day!\n");
}
} // end else statement
else (money_paid == final_bill)
{
printf("no change is owed to the customer\n");
printf("thank you and have a great day!\n");
}
getchar();
return 0;
} // end main function
any help really appreciated!!!
I.
For OO I decided to use Tkinter, because it doesn't require any additional software installation (it is included in the Python distribution).
I had taken a look at the AMPL and GAMS GUIs before starting my own, and of course I have no intention of duplicating Python IDEs.
Running OO GUI is performed via
r = p.manage()
instead of p.solve(), or via
from scikits.openopt import manage
r = manage(p, solver, ...)
Let me also note: manage() can handle named argument start = {False}/True/0/1, that means start w/o waiting for user-pressed "Run".
Currently there are only 3 buttons: "Run/Pause", "Exit" and "Enough".
------------
solver: ralg problem: GUI_example goal: minimum
iter objFunVal log10(maxResidual)
...
102 5.542e+01 -6.10
istop: 88 (button Enough has been pressed)
Solver: Time Elapsed = 1.19 CPU Time Elapsed = 1.12
Plotting: Time Elapsed = 6.86 CPU Time Elapsed = 5.97
objFunValue: 55.423444 (feasible, max constraint = 7.98936e-07)
Let me also note that
- pressing "Exit" before "Enough" and before solver finish will return None, so there will be no fields r.ff, r.xf etc.
- for some IDEs pressing "Exit" doesn't close matplotlib window (if you are using p.plot=1). You should either wait for newer matplotlib version (they intend to fix it) or try to fix it by yourself via choosing correct Agg, see here for details
- OO Doc webpage entry related to OO GUI will be committed later
- you could play with the GUI example by yourself
malloc_get_state, malloc_set_state — record and restore state of malloc implementation
Synopsis
#include <malloc.h> void* malloc_get_state(void); int malloc_set_state(void *state);
Description
Note: these functions were removed in glibc version 2.25.
The malloc_get_state() function records the current state of all malloc internal bookkeeping variables (but not the actual contents of the heap or the state of malloc hook function pointers). The state is recorded in a system-dependent opaque data structure allocated via malloc(3), and a pointer to that data structure is returned as the function result. The malloc_set_state() function restores the state of all malloc internal bookkeeping variables to the values recorded in the opaque data structure pointed to by state.
Return Value
On success, malloc_get_state() returns a pointer to a newly allocated opaque data structure. On error (for example, if memory for the data structure could not be allocated), it returns NULL. On success, malloc_set_state() returns 0. If the implementation detects that state does not correspond to a correctly formed data structure, it returns -1.
Attributes
For an explanation of the terms used in this section, see attributes(7).
Conforming to
These functions are GNU extensions.
Notes
These functions are useful when using this malloc(3) implementation as part of a shared library, and the heap contents are saved/restored via some other method. This technique is used by GNU Emacs to implement its "dumping" function.
Hook function pointers are never saved or restored by these functions, with two exceptions: if malloc checking (see mallopt(3)) was in use when malloc_get_state() was called, then malloc_set_state() resets malloc checking hooks if possible; if malloc checking was not in use in the recorded state, but the caller has requested malloc checking, then the hooks are reset to 0.
See Also
malloc(3), mallopt(3)
Colophon
This page is part of release 5.04 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Referenced By
malloc(3).
The man page malloc_set_state(3) is an alias of malloc_get_state(3).
The syntax described here is used to define the set of valid values for CSS properties. A property value can have one or more components.
2.1. Component value types
Component value types are designated in several ways:
- keyword values (such as auto, disc, etc.), which appear literally, without quotes (e.g.
auto)
- basic data types, which appear between < and > (e.g., <length>, <percentage>, etc.).
- types that have the same range of values as a property bearing the same name (e.g., <‘border-width’>, <‘background-attachment’>, etc.). In this case, the type name is the property name (complete with quotes) between the brackets. Such a type does not include CSS-wide keywords such as inherit.
- non-terminals that do not share the same name as a property. In this case, the non-terminal name appears between < and >, as in <spacing-limit>. Notice the distinction between <border-width> and <‘border-width’>.
2.6. Property value examples
Below are some examples of properties with their corresponding value definition fields
3. Textual Data Types
An identifier is a sequence of characters conforming to the <ident-token> grammar. [CSS3SYN] Identifiers cannot be quoted; otherwise they would be interpreted as a string.
A URL is a pointer to a resource and is a functional notation denoted by <url>. The syntax of a <url> is:
<url> = url( <string> <url-modifier>* )
body { background: url("") }
In addition to the syntax defined above, a <url> can sometimes be written in other ways:
For legacy reasons, a <url> can be written without quotation marks around the URL itself. This syntax is specially-parsed, and produces a <url-token> rather than a function syntactically. [CSS3SYN]
Some CSS contexts, such as @import, allow a <url> to be represented by a <string> instead. This behaves identically to writing a url() function containing that string.
Because these alternate ways of writing a <url> are not functional notations, they cannot accept any <url-modifier>s.
Note: The special parsing rules for the legacy quotation mark-less <url> syntax means that parentheses, whitespace characters, single quotes (') and double quotes (") appearing in a URL must be escaped with a backslash, e.g. url(open\(parens), url(close\)parens). Depending on the type of URL, it might also be possible to write these characters as URL-escapes (e.g. url(open%28parens) or url(close%29parens)) as described in [URL]. (If written as a normal function containing a string, ordinary string escaping rules apply; only newlines and the character used to quote the string need to be escaped.)>.
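As an illustration of the note above, this small Python helper (not defined by the spec) applies backslash escapes to the characters that would otherwise break a quotation-mark-less url():

```python
def escape_unquoted_url(url):
    """Backslash-escape the characters the note says must be escaped
    inside an unquoted url(): parentheses, whitespace, and quotes."""
    special = set("()'\" \t\n")
    return "".join("\\" + ch if ch in special else ch for ch in url)
```

For instance, escape_unquoted_url("open(parens") yields the escaped form shown in the note, while URLs without special characters pass through unchanged.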
3.4.2. URL Modifiers
The url() function supports specifying additional <url-modifier>s, which change the meaning or the interpretation of the URL somehow. A <url-modifier> is either an <ident> or a function.
This specification does not define any <url-modifier>s, but other specs may do so.
4. Numeric Data Types
4.1. Integers: the <integer> type
Integer values are denoted by <integer>.
4.2. Real Numbers: the <number> type
Number values are denoted by <number>.
4.3. Percentages: the <percentage> type
A percentage value is denoted by <percentage>, and consists of a <number> immediately followed by a percent sign %. It corresponds to the <percentage-token> production in the CSS Syntax Module [CSS3SYN].
4.4. Numbers with Units: dimensions
A dimension is a <number> immediately followed by a unit identifier. It corresponds to the <dimension-token> production in the CSS Syntax Module [CSS3SYN]. Like keywords, unit identifiers are ASCII case-insensitive.
CSS uses dimensions to specify distances (<length>), durations (<time>), frequencies (<frequency>), resolutions (<resolution>), and other quantities.
5. Distance Units: the <length> type
Lengths refer to distance measurements and are denoted by <length> in the property definitions. A length is a dimension. However, for zero lengths the unit identifier is optional (i.e. can be syntactically represented as the <number> 0).
Aside from rem (which refers to the font-size of the root element), the font-relative lengths refer to the font metrics of the element on which they are used. The exception is when they occur in the value of the font-size property itself, in which case they refer to the computed font metrics of the parent element (or the computed font metrics corresponding to the initial values of the font property, if the element has no parent).
- em unit
- Equal to the computed value of the font-size property of the element on which it is used.
- ex unit
- Equal to the used x-height of the first available font [CSS3-FONTS]. In the cases where it is impossible or impractical to determine the x-height, a value of 0.5em must be assumed.
- ch unit
- Equal to the used advance measure of the "0" (ZERO, U+0030) glyph found in the font used to render it. In the cases where it is impossible or impractical to determine the measure of the "0" glyph, a value of 0.5em must be assumed.
- rem unit
- Equal to the computed value of font-size on the root element. When specified on the font-size property of the root element, the rem units refer to the property’s initial value.
5.1.2. Viewport-percentage lengths: the vw, vh, vmin, vmax units
The viewport-percentage lengths are relative to the size of the initial containing block. When the height or width of the initial containing block is changed, they are scaled accordingly. However, when the value of overflow on the root element is auto, any scroll bars are assumed not to exist. Note that the initial containing block’s size is affected by the presence of scrollbars on the viewport.
For a CSS device, these dimensions are anchored either

- by relating the physical units to their physical measurements, or
- by relating the pixel unit to the reference pixel.

Note that if the anchor unit is the pixel unit, the physical units might not match their physical measurements. Alternatively if the anchor unit is a physical unit, the pixel unit might not map to a whole number of device pixels.
6. Other Quantities
6.1. Angle Units: the <angle> type and deg, grad, rad, turn units
Angle values are dimensions denoted by <angle>. The angle unit identifiers are:
- deg
- Degrees. There are 360 degrees in a full circle.
- grad
- Gradians. There are 400 gradians in a full circle.
- rad
- Radians. There are 2π radians in a full circle.
- turn
- Turns. There is 1 turn in a full circle.
When an angle denotes a direction, it must always be interpreted as a bearing angle, where 0deg is "up" or "north" on the screen, and larger angles are more clockwise (so 90deg is "right" or "east").
For example, in the linear-gradient() function, the <angle> that determines the direction of the gradient is interpreted as a bearing angle.
6.2. Duration Units: the <time> type and s, ms units
Time values are dimensions denoted by <time>. The time unit identifiers are:
- s
- Seconds.
- ms
- Milliseconds. There are 1000 milliseconds in a second.
7.3. 2D Positioning: the <position> type
The <position> data type is defined in [CSS3BG]. UAs that support CSS Backgrounds & Borders Level 3 or its successor must interpret <position> as defined therein.
8. Functional Notations
8.1. Mathematical Expressions: calc()
The calc() function allows mathematical expressions with addition (+), subtraction (-), multiplication (*), and division (/) to be used as component values. Components of a calc() expression can be literal values, attr() or calc() expressions, or <percentage> values that resolve to one of the preceding types.
:root { font-size: calc(100vw / 40); }
If the rest of the design is specified using the rem unit, the entire layout will scale to match the viewport width.
.foo { background: url(top.png), url(bottom.png); background-repeat: no-repeat; background-position: calc(50% + 20px) calc(50% + 20px), 50% 50%; }
.foo { background-image: linear-gradient(to right, silver, white 50px, white calc(100% - 50px), silver); }
8.1.1. Syntax
The syntax of a calc() function is:
<calc()> = calc( <calc-sum> ) <calc-sum> = <calc-product> [ [ '+' | '-' ] <calc-product> ]* <calc-product> = <calc-value> [ '*' <calc-value> | '/' <number> ]* <calc-value> = <number> | <dimension> | <percentage> | ( <calc-sum> )
Where a <dimension> is a dimension.
In addition, whitespace is required on both sides of the + and - operators. (The * and / operators can be used without whitespace.) Note that algebraic simplifications do not affect the validity of the calc() expression or its resolved type. For example, calc(5px - 5px + 10s) or calc(0 * 5px + 10s) are both invalid due to the attempt to add a length and a time.
8.1.3. Computed Value
The computed value of a calc() expression is the expression with all components computed, with all multiplication/division subexpressions resolved, and with all addition/subtraction subexpressions resolved where both sides are the same type.
Where percentages are not resolved at computed-value time, they are not resolved in calc() expressions, e.g. calc(100% - 100% + 1em) resolves to calc(0% + 1em), not to calc(1em). If there are special rules for computing percentages in a value (e.g. the height property), they apply whenever a calc() expression contains percentages.
Note: Thus, the computed value of a calc() expression can be represented as either a number or a tuple of a dimension and a percentage.
.foo { width: 200px; background-image: url(bg.png); background-position: left 50%; /* different than: */ background-position: left 100px; /* despite 50% of 200px being 100px */ }
Due to this, background-position preserves the percentage in a calc() rather than resolving it directly into a length, so that an expression like background-position: left calc(50% + 20px) properly centers the background and then shifts it 20px to the right, rather than placing its left edge 20px off of center.
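The computed value of such an expression can be modeled as a (length, percentage) pair. The Python sketch below is a simplified model of these rules, not a CSS engine; it folds calc() addition/subtraction terms for calc(100% - 100% + 1em), keeping the percentage component alive instead of dropping it:

```python
def calc_sum(terms):
    """Fold calc() addition/subtraction terms, keeping the length component
    (here measured in em, for simplicity) separate from the percentage one."""
    em = sum(e for e, _ in terms)
    pct = sum(p for _, p in terms)
    return (em, pct)

# calc(100% - 100% + 1em), expressed as (em, percent) term pairs
computed = calc_sum([(0, 100), (0, -100), (1, 0)])
```

The result is (1, 0), i.e. calc(0% + 1em): the percentage part is retained as 0% rather than simplified away, matching the rule stated above.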
The value resulting from an expression must be clamped to the range allowed in the target context.
Note: Note this requires all contexts accepting calc() to define their allowable values as a closed (not open) interval.
width: calc(5px - 10px); width: 0px;
8.2. Toggling Between Values: toggle()
The toggle() expression allows descendant elements to cycle over a list of values instead of inheriting the same value.
For example, the following rule makes <em> elements italic in general, but makes them normal if they’re inside something that’s italic:
em { font-style: toggle(italic, normal); }
ul { list-style-type: toggle(disc, circle, square, box); }
The syntax of the toggle() expression is:
toggle( <toggle-value># )
where <toggle-value> is any CSS value that is valid where the expression is placed, and that doesn’t contain any top-level commas. If any of the values inside are not valid, then the entire toggle() expression is invalid. The toggle() expression may be used as the value of any property, but must be the only component in that property’s value.
The toggle() notation is not allowed to be nested; nor may it contain attr() or calc() notations. Declarations containing such constructs are invalid.
background-position: 10px toggle(50px, 100px); /* toggle() must be the sole value of the property */ list-style-type: toggle(disc, 50px); /* 50px isn’t a valid value of 'list-style-type' */
To determine the computed value of toggle(), first evaluate each argument as if it were the sole value of the property in which toggle() is placed to determine the computed value that each represents, called Cn for the n-th argument to toggle(). Then, compare the property’s inherited value with each Cn. For the earliest Cn that matches the inherited value, the computed value of toggle() is Cn+1. If the match was the last argument in the list, or there was no match, the computed value of toggle() is the computed value that the first argument represents.
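The matching rule above can be sketched as a small Python function; this is an illustration of the algorithm, not an implementation of CSS value computation (values are compared here as plain strings for simplicity):

```python
def toggle_computed(args, inherited):
    """Computed value of toggle(args...) given the parent's computed value."""
    for n, c_n in enumerate(args):
        if c_n == inherited:
            # earliest match: use the next argument, wrapping past the end
            return args[(n + 1) % len(args)]
    return args[0]  # no match: fall back to the first argument
```

With args ["disc", "circle", "square"], inheriting "disc" yields "circle", inheriting "square" (the last argument) wraps back to "disc", and an unmatched inherited value also yields "disc".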
Note: toggle() explicitly looks at the computed value of the parent, so it works even on non-inherited properties. This is similar to the inherit keyword, which works even on non-inherited properties.
Note: The computed value of a property is an abstract set of values, not a particular serialization [CSS21], so comparison between computed values should always be unambiguous and have the expected result. For example, a Level 2 background-position computed value is just two offsets, each represented as an absolute length or a percentage, so the declarations background-position: top center and background-position: 50% 0% produce identical computed values. If the "Computed Value" line of a property definition seems to define something ambiguous or overly strict, please provide feedback so we can fix it.
If toggle() is used on a shorthand property, it sets each of its longhands to a toggle() value with arguments corresponding to what the longhand would have received had each of the original toggle() arguments been the sole value of the shorthand.
margin: toggle(1px 2px, 3px 4px, 1px 5px);
is equivalent to the following longhand declarations:
margin-top: toggle(1px, 3px, 1px); margin-right: toggle(2px, 4px, 5px); margin-bottom: toggle(1px, 3px, 1px); margin-left: toggle(2px, 4px, 5px);
Note that, since 1px appears twice in the top and bottom margins, they will cycle between only two values while the left and right margins cycle through three. In other words, the declarations above will yield the same computed values as the longhand declarations below:
margin-top: toggle(1px, 3px); margin-right: toggle(2px, 4px, 5px); margin-bottom: toggle(1px, 3px); margin-left: toggle(2px, 4px, 5px);
8.3. Attribute References: the attr() notation

The attr() expression allows a property value to reference the value of an attribute on the element. Its syntax is:

attr( <attr-name> <type-or-unit>? [ , <attr-fallback> ]? )

If no <type-or-unit> argument is given, string is implied.

The optional <attr-fallback> argument gives a fallback value to use if the attribute is absent or invalid. Note that, unlike <toggle-value>s, an attr() <attr-fallback> value may contain top-level commas, as it is always the last argument in the functional notation.

The attr() expression is only valid if:

- the attr() expression’s type is valid where the attr() expression is placed, and
- if the attribute name is given with a namespace prefix, the prefix is declared.

The attribute name itself is not CSS-escaped: the prefix "\33" will produce a string containing those three characters, not a string containing "3" (the character that the escape would evaluate to).

The recognized <type-or-unit> values are interpreted as follows:

- color
- The attribute value must parse as a HASH or IDENT CSS token, and be successfully interpreted as a <color>. The default is currentcolor.
- url
- The attribute value is taken as the contents of a CSS <string>. It is interpreted as a quoted string within the url() notation. The default is about:invalid, which is a URL that points to a non-existent document.
- integer, number
- The attribute value must parse as a NUMBER CSS token, and be successfully interpreted as the specified type.
- em, ex, px, rem, vw, vh, vmin, vmax, mm, cm, in, pt, pc, deg, grad, rad, ms, s, Hz, kHz, %
- The attribute value must parse as a NUMBER CSS token, and is interpreted as a dimension with the specified unit.
How to divide a project? (For readability)
My QMainWindow project has more than 2000 lines of code and it's getting bigger day by day. This causes complexity and hurts readability. What should I do in this case? How can I make my project more readable? I need suggestions from specialists.
Did you put everything in your MainWindow.cpp / main.cpp? So you have 2000 lines in one file?!
2000+ LOC is not a problem if it is necessary, but normally you should avoid it: separate your function definitions from your class headers, divide your program into classes that are given specific tasks (if you haven't done so already), and bring structure to your project.
It helps if you first write down what you are planning to do and which classes, variables and connections you need to realize that.
(In text form or create an UML class diagram)
Did you understand the basics of object-oriented programming?
What kind of program do you work on? Are you new to programming in general or did I get your question wrong? If I did, I'm sorry for that :)
@Pl45m4 You got my question right. I have some separate classes like a chart class, a clickable label, and QThread classes. In total I have over 4000 lines. Like I said, I have over 2000 lines in mainwindow.cpp. It's a simple desktop project that shows camera feeds, some database stuff, charts, logging, other windows, etc.
Is it possible to write member functions in another file. like:
void MainWindow::someMemberButtonClickedFunction() { // all my these types of member functions are in the mainwindow.cpp file }
Is it good practice to separate them into another file? Is this possible?
By the way, I am not very experienced at programming. I am currently trying some languages and doing projects with them. I liked C++ and I want to improve myself with C++ and GUI programming.
So your 2000 loc are mainly event handler functions for buttonClick events or text inputs?
Maybe you can find a way to split your MainWindow class and outsource some tasks to their own classes. This may save a few hundred lines.
Or you take all your slot / eventhandler functions and put them into an extra file, "mainwindow_slots.cpp" or something.
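Mechanically, such a split is plain C++: member function definitions may live in any .cpp file that includes the class header, and the linker matches them by qualified name. A sketch (file, class, and function names here are made up, not from your project):

```cpp
// --- mainwindow.h (sketch) ---
class MainWindow {
public:
    void onStartClicked();                    // declared here...
    int clicks() const { return m_clicks; }
private:
    int m_clicks = 0;
};

// --- mainwindow_slots.cpp (sketch) ---
// #include "mainwindow.h"
// ...defined here, in a different file from the rest of MainWindow;
// the linker resolves the call via the qualified name MainWindow::onStartClicked.
void MainWindow::onStartClicked() { ++m_clicks; }
```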
I wouldn't recommend that, because of 2 things:
First, there is a reason why these slot functions are in mainwindow.cpp. If you separate them, you have to tell your linker where your slots are, and this can cause more trouble and confusion.
Second, if the main part of the 2000 lines is slots / events, you just move the problem to another file ;)
Better try to thin your code wherever you can.
In addition, you can hide blocks of your file by clicking on the down-arrow next to the line number (Qt Creator creates those blocks automatically).
In C# it is possible to define #regions of code, which you can name as you want and set the end of one region wherever you want.
You can close those regions afterwards and save some lines (it just hides the code).
Don't know if this is possible in Qt / C++ as well.
@Pl45m4 That's what i want to hear: advices. Thank you for any of them.
I think that's a good point for programming, especially for medium-scale and larger projects. If anyone writes advice or suggestions just under here, it will be good for later readers. I think I can give a piece of advice too :)
- Never start a project before you have designed it in all its details, and keep the design flexible for later changes.
This is the lesson that I extracted from my experience on this project.
OK, I just found out that the code management comes with your IDE. The #region tag only works in Visual Studio :(
What you could do is redeclare your namespace. You can hide all (or at least some) functions you no longer have to work on.
In terms of style it is not the best solution, but it can help in your case, keeping your code readable (for you).
foo.h
namespace bar {
    class foo {
    public:
        foo();
        ~foo();
    };
}
foo.cpp
// includes...
namespace bar {
    // Constructor, Destructor
}
namespace bar {
    // Stuff you want to hide
}
It's not a good solution, but it can help.
I think that's a good point for programming, especially for medium scaled and above projects.
Yeah, in some cases it helps a lot if you write something down first or create a UML diagram.
I had some projects for school / university where I thought I didn't need any plan because I was gonna use only 1 or 2 classes anyway.
After writing header file no. 7, I wished I had made a diagram first ("Well, that escalated quickly") ;-)
Separation of Concerns (SoC):
There are design approaches that go further, but SoC is a good place to start.
Mozilla C++ portability guide
I'm up to Chapter 18 of Stroustrup's The C++ Programming Language. The templates chapter was painful and I had to read it twice. I think I got it the second time through. Stroustrup talks about how people ask him how long it takes to learn C++ and he says (paraphrased) "a year or two probably; be happy, it's not as long as it takes to learn a spoken language or a musical instrument". It's still frustrating. The syntax and whatnot are so easy to learn, but the idioms and common practices take forever to ingrain.
Today, in my somewhat futile, half-hearted attempt to learn autotools, I chanced upon the Mozilla C++ portability guide. It includes such advice as:
- Don't use templates.
- Don't use exceptions.
- Don't use namespaces.
- Don't use the C++ standard library, not even iostream.
- Don't put assignments in if statements.
- Use macros.
This is interesting, since it's the exact opposite of what Stroustrup writes. And Stroustrup also says he wrote this book in such a way as to demonstrate "standard" portable code. I guess I don't doubt that the Mozilla guys know what they're talking about, but if that's the extent you have to bend in order to write "portable" code, I hope I don't ever have to.
3 Comments
I happened to work at Netscape at the time this document was produced. Many of the items are dated; I don't, for example, agree today that "avoid RTTI" is a reasonable guideline.
I'm currently preparing a book on C++ portability that will be released by Prentice Hall, based on my experiences at Netscape. Of the items mentioned by Mozilla in that document, when it comes down to it, the only ones I mention are 1) take care regarding the sign of char types, 2) use of NSPR (using an abstraction library is definitely a good thing), and 3) fix your compiler warnings. Beyond that, I think anything that Stroustrup claims is safe is fine to use (especially if portability means working with Visual C++, and GCC on Mac and Linux, which is the case for a great many people).
After reading Stroustrup, I'd move on to the Effective C++ book by Scott Meyers. Read that book back to front, and then do it again twice more. Use Stroustrup as a reference, but read (and practice) what Meyers has to say until it is second nature. You'll be a better C++ programmer for it.
Awesome. Thanks for the advice and clarification.
I will be using new C++ compilers, which support RTTI. Is there still any problem in using RTTI (dynamic_cast, all those things)?
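For reference, with RTTI enabled (the default on modern compilers) dynamic_cast behaves as specified on a polymorphic hierarchy. A minimal illustration, with made-up class names:

```cpp
struct Base { virtual ~Base() {} };        // polymorphic, so RTTI applies
struct Derived : Base { int value = 42; };

// Returns a null pointer instead of misbehaving when b isn't a Derived.
Derived* as_derived(Base* b) {
    return dynamic_cast<Derived*>(b);
}
```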
If Mohamed El-Erian were any busier, he wouldn't get any sleep. The 43-year-old portfolio manager of PIMCO Emerging Markets Bond Fund wakes up at 3:15 every morning to walk his boxer pup, Ch?. That's also when he calls PIMCO's London office to check on the 17 markets he tracks. By 4:30 a.m., he is at his Newport Beach (Calif.) office, writing a summary of the night's events for PIMCO's 200-member debt analysis team. With their feedback, he starts trading at 6 a.m., continuing nonstop till 4 p.m.
Such diligence would have been pointless a few years ago when emerging markets moved in tandem. If Chinese bonds fell, Mexican ones would too, even if Mexico's economy was fine. But the past three years have seen a "decoupling" of emerging markets, so investors now see them as separate entities. After the Russian debt crisis of 1998, the International Monetary Fund began publishing detailed credit reports on each market, and countries realized the importance of communicating with creditors. "With more access to information and analysis, investors can differentiate better between countries," says El-Erian. That's a skill he honed in the 14 years he worked at the IMF where "you get to see how policymakers in these countries think and what measures are at their disposal." El-Erian is a New York-born economist who attended Oxford and Cambridge on scholarship.
The decoupling trend was sharply evident in 2001. Argentine bonds, which had a 22% weighting in the J.P. Morgan Emerging Market Bond Index, plummeted 66.9%, yet the 16 other emerging markets that the index tracks all rose--from a 5.5% gain for Venezuela to 55.8% in Russia.
The wide variations allowed El-Erian, who actively shifts between countries, to stand out. His fund rose 27.6% in 2001, more than double the average emerging-market bond fund's 10.6% total return. And its 25.8% three-year annualized return (as of Jan. 21) beats any other retail bond fund. Besides the $135 million fund, he runs $6.4 billion for institutional clients. Lately, he's been buying Panama's bonds, which he sees as relatively safe and cheap, and selling Russia's, which may be vulnerable to a decline in oil prices.
El-Erian sees emerging markets in three tiers. First-tier countries, such as Mexico and Poland, have learned from the currency crises of the 1990s. Anchored economically to more industrialized partners, such as the U.S. and Germany, they've improved their balance sheets and raised their foreign currency reserves. El-Erian invests most of his portfolio in this tier. The bonds tend to have higher credit ratings and lower yields than those of other markets. Mexican sovereigns, for instance, yield 7.5% and are rated BBB, vs. the 11.5% yield and BB- rating for the benchmark. "Mexico has accumulated significant international currency reserves, and it has pre-financed its debt for 2002," he says.
At the other end of the spectrum are third-tier markets such as Argentina, which in January defaulted on its debt. El-Erian anticipated that 18 months ago and sold the bonds. "Argentina established a currency pegged to the U.S. dollar without the willingness to accept the discipline that comes with that," he says. Others in the tier include Ukraine and Ecuador, whose bonds now yield 14% and 16%, respectively.
Between the first and third tiers are such nations as Brazil and the Philippines. Their bonds have high yields but are less volatile than the third tiers. Brazilian sovereigns, for instance, yield over 13% because of fears of contagion from Argentina, Brazil's biggest trading partner. El-Erian thinks these worries are largely unfounded, so he holds a 25% Brazil position, a neutral weighting relative to his benchmark. Yet he realizes that conditions could change overnight. That's why he walks his dog hours before the sun comes up. By Lewis Braham
Hi,
I have some functions which will be called in several pages. I don't want to define the same functions in every page, as that will make the code hard to change.
I've already used a class to hold the common functions and use that class in the pages that need them.
But the problem is:
Some members available in the aspx.cs page are not available in a plain .cs class, such as "Session", "Response.Redirect", and "Page".
Let's say I have a function which check the session value and redirect the user.
string pId = Session["ProjectID"] as string ;
if (pId != null)
{
Response.Redirect("~/somepage.aspx");
}
else
{
Response.Redirect("~/default.aspx");
}
This code causes errors in the .cs class and only works in the aspx.cs page.
I don't know which namespace should be used in the .cs class.
Is there a way I can write this function in one place and call it in the page?
Thanks and I hope I've stated the problem clear.
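One common approach (a hedged sketch; the class and method names below are made up) is to go through HttpContext.Current from the System.Web namespace, which exposes the same Session and Response objects that a Page surfaces as properties, from any class in the web application:

```csharp
using System.Web;

public static class ProjectHelper
{
    // Callable from any .cs class in the web app, not just code-behind.
    public static void RedirectByProjectId()
    {
        HttpContext ctx = HttpContext.Current;
        string pId = ctx.Session["ProjectID"] as string;

        if (pId != null)
        {
            ctx.Response.Redirect("~/somepage.aspx");
        }
        else
        {
            ctx.Response.Redirect("~/default.aspx");
        }
    }
}
```

Pages would then simply call ProjectHelper.RedirectByProjectId(); the class library needs a reference to the System.Web assembly.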
Read SEED microbarometer data and plot.
Project description
Microbarometer read/plot from Python
Easy Python program for reading and plotting SEED microbarometer data. Uses ObsPy.
Install
python -m pip install -e .
If you have trouble installing PROJ.4, try
conda install cartopy
Usage
import microbarometer as mb data = mb.load('myfile.asc')
Data format
ObsPy is mostly for MiniSEED, and wasn't able to read the particular
.seed files we had.
An alternative method is to convert SEED to SAC format:
- Download the rdseed utility. The executable rdseed.rh6.linux_64 worked for me on Ubuntu as well.
- extract executable. For Ubuntu:
tar xf rdseedv*.tar.gz rdseedv*/rdseed.rh6.linux_64
mv rdseedv*/rdseed* rdseed
- convert SEED to SAC, for example to write SAC ASCII (readable by Pandas)
../rdseed -f myfile.seed -o 6 -d
Other output formats are possible via the -o option.
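For a rough idea of what reading such a dump can look like without heavy dependencies, here is a hypothetical minimal reader (real SAC ASCII output from rdseed also carries header lines, which this sketch skips by ignoring non-numeric tokens):

```python
def read_sac_ascii_samples(path):
    """Collect whitespace-separated numeric samples from a SAC ASCII dump."""
    samples = []
    with open(path) as f:
        for line in f:
            for tok in line.split():
                try:
                    samples.append(float(tok))
                except ValueError:
                    break  # header text: skip the rest of this line
    return samples
```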
Notes
To get help:
./rdseed -h
channel BDO is "Bottom Pressure" in Pascals.
- Convert SEED to SAC (readable by ObsPy, MatSAC, etc.):
./rdseed -f myfile.seed -d
- List the time range of data with the -t option:
./rdseed -f myfile.seed -t
- Read SEED headers (extensive metadata):
./rdseed -f myfile.seed -s
Introduction.
Specification
- Working voltage: 3.3~6V
- Current consumption @2.5v: 40uA / working mode, 0.1uA / standby mode
- Communication interface: I2C / SPI (3 or4 lines)
- Size: 20x15mm
Application
- Tap/Double Tap Detection
- Free-Fall Detection
- Selecting Portrait and Landscape Modes
- Tilt sensing
Connection Diagram
This diagram shows the I2C connection method for the Arduino UNO. It will be different if you use other Arduino controllers, where the SCL & SDA pins might be located elsewhere. If you want to use the SPI interface, please refer to the ADXL345 datasheet for more info.
Sample Code
Upload the sample sketch below to the UNO or your board to check the 3-axis acceleration data and the module's tilt information.
For Arduino
int regAddress = 0x32;  // first axis-acceleration-data register on the ADXL345
int x, y, z;            // three-axis acceleration data
char str[100];          // buffer for formatted output
double roll = 0.00, pitch = 0.00; // Roll & Pitch are the angles of rotation about the X and Y axes,
                                  // in the sequence R(x-y-z)

void setup() {
  /* ... */
}

void loop() {
  Serial.print("The acceleration info of x, y, z are:");
  sprintf(str, "%d %d %d", x, y, z);
  Serial.print(str);
  Serial.write(10);
  // Roll & Pitch calculation
  RP_calculate();
  Serial.print("Roll:");
  Serial.println(roll);
  Serial.print("Pitch:");
  Serial.println(pitch);
}

// calculate the Roll & Pitch
void RP_calculate() {
  double x_Buff = float(x);
  double y_Buff = float(y);
  double z_Buff = float(z);
  roll  = atan2(y_Buff, z_Buff) * 57.3;
  pitch = atan2(-x_Buff, sqrt(y_Buff * y_Buff + z_Buff * z_Buff)) * 57.3;
}
For Micropython
from machine import Pin, I2C
import ADXL345
import time

i2c = I2C(scl=Pin(22), sda=Pin(21), freq=10000)
adx = ADXL345.ADXL345(i2c)

while True:
    x = adx.xValue
    y = adx.yValue
    z = adx.zValue
    print('The acceleration info of x, y, z are:%d,%d,%d' % (x, y, z))
    roll, pitch = adx.RP_calculate(x, y, z)
    print('roll=', roll)
    print('pitch=', pitch)
    time.sleep_ms(50)
By the way, we have collected some useful 3-axis data processing methods: How to Use a Three-Axis Accelerometer for Tilt Sensing.
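As a quick sanity check of the roll/pitch math used in the sketches above, here are the same formulas in plain Python (57.3 approximates 180/pi; the sample reading is hypothetical):

```python
import math

def roll_pitch(x, y, z):
    # Same math as RP_calculate() in the Arduino sketch above
    roll = math.atan2(y, z) * 57.3
    pitch = math.atan2(-x, math.sqrt(y * y + z * z)) * 57.3
    return roll, pitch

# A flat board reads roughly (0, 0, +1 g), so both angles should be ~0:
print(roll_pitch(0, 0, 256))  # (0.0, 0.0)
```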
Result
Open the Serial Monitor to see the 3-axis acceleration data and the roll/pitch angles. Watch the values change as you tilt the accelerometer.
More Documents
Get Triple Axis Accelerometer Breakout - ADXL345 (SKU:SEN0032) from DFRobot Store or DFRobot Distributor.
Node:Function prototyping, Next:The exit function, Previous:Functions with values, Up:Functions
Function prototyping
Functions do not have to return integer values, as in the above examples, but can return almost any type of value, including floating point and character values. (See Variables and declarations, for more information on variable types.)
A function must be declared to return a certain variable type (such as an integer), just as variables must be. (See Variables and declarations, for more information about variable types.) To write code in good C style, you should declare what type of value a function returns (and what type of parameters it accepts) in two places:
- At the beginning of the program, in global scope. (See Scope.)
- In the definition of the function itself.
Function declarations at the beginning of a program are called prototypes.
Here is an example of a program in which prototypes are used:
#include <stdio.h>
#include <stdlib.h>

void print_stuff (int foo, int bar);
int calc_value (int bas, int quux);

void print_stuff (int foo, int bar)
{
  int var_to_print;
  var_to_print = calc_value (foo, bar);
  printf ("var_to_print = %d\n", var_to_print);
}

int calc_value (int bas, int quux)
{
  return bas * quux;
}

int main()
{
  print_stuff (23, 5);
  exit (0);
}
The above program will print the text var_to_print = 115 and then quit.
Prototypes may seem to be a nuisance, but they overcome a problem intrinsic to compilers, which is that they compile functions as they come upon them. Without function prototypes, you usually cannot write code that calls a function before the function itself is defined in the program. If you place prototypes for your functions in a header file, however, you can call the functions from any source code file that includes the header. This is one reason C is considered to be such a flexible programming language.
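For instance, the header-based layout just described might look like this (the file name mathutil.h is illustrative):

```c
/* The prototype would live in "mathutil.h" and be pulled in with
   #include "mathutil.h" by every file that calls the function: */
int calc_value(int bas, int quux);

/* This caller compiles fine even though the definition appears later,
   or could live in a completely different .c file: */
int demo(void)
{
    return calc_value(23, 5);
}

/* The definition, matched up by the linker: */
int calc_value(int bas, int quux)
{
    return bas * quux;
}
```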
Some compilers avoid the use of prototypes by making a first pass just to see what functions are there, and a second pass to do the work, but this takes about twice as long. Programmers already hate the time compilers take, and do not want to use compilers that make unnecessary passes on their source code, making prototypes a necessity. Also, prototypes enable the C compiler to do more rigorous error checking, and that saves an enormous amount of time and grief.
hi,
i have a very singular problem:
I can't get my class 'A' to be 48 bytes in size. 44 bytes is OK, 52 bytes is OK. But if it takes 48 bytes then my program crashes. It doesn't report any error or warning.
any idea why?
I'm using SDL library on Linux (Ubuntu).
P.S.: Sorry for my weak English.
This isn't a C++ problem. It could be an alignment issue: Unix-like OSes align data in certain ways, and you can break things around this if you're careless with a pointer or something. But the bottom line is that your class's size is not the problem itself; very likely you have a bug that only shows up when the class is a certain size.
thanks for helping.
where can i get some reference about this issue?
again, thanks a lot.
Please post your code here.
#ifndef _BASEGAME_
#define _BASEGAME_
#include "SDL.h"
#include "keyBoard.h"
#include <string>
using std::string;
class BaseGame{
public:
static int key[nKeys];
BaseGame(string title, int screenWidth, int screenHeight);
virtual ~BaseGame();
void start();
protected:
bool quitGame;
const int screenWidth;
const int screenHeight;
string title;
SDL_Surface* screen;
private:
void HandleEvent(const SDL_Event& event);
void init();
virtual void render() = 0;
virtual void logicLoop() = 0;
virtual void eventsTreat() = 0;
BaseGame(const BaseGame&);
BaseGame& operator=(const BaseGame&);
};
#endif
#ifndef _GAME_
#define _GAME_
#include "BaseGame.h"
class Level;
class LevelManager;
class Border;
class DrawBall;
class DrawBandeja;
class Points;
class Game: public BaseGame{
public:
Game();
virtual ~Game();
private:
virtual void logicLoop();
virtual void render();
virtual void eventsTreat();
virtual void setReferencesUp();
LevelManager* levelManeger;
Level* currentLevel;
DrawBall* ball;
DrawBandeja* bandeja;
Points* points;
Border* border;
private:
Game(const Game&);
Game& operator=(const Game&);
};
#endif
Well, in the Game class, if it is exactly 48 bytes then a window appears and the program exits with no error, but without doing what I want. Otherwise, it works as I intend.
my makefile:
CC = g++
CFLAGS = -Wall `sdl-config --cflags --libs`
PROG = Game
PRE = rm -f $(PROG)
#-------------------------------------------------------------------------------
#BLOCK
PATH_BLOCK = Block/
IBLOCK = -I./$(PATH_BLOCK)
BLOCK = $(wildcard $(PATH_BLOCK)*.cpp)
#COMMONBLOCK
PATH_COMMON_BLOCK = $(PATH_BLOCK)CommonBlock/
ICOMMON_BLOCK = -I./$(PATH_COMMON_BLOCK)
COMMON_BLOCK = $(wildcard $(PATH_COMMON_BLOCK)*.cpp)
#ITEMBLOCK
PATH_ITEM_BLOCK = $(PATH_BLOCK)ItemBlock/
IITEM_BLOCK = -I./$(PATH_ITEM_BLOCK)
ITEM_BLOCK = $(wildcard $(PATH_ITEM_BLOCK)*.cpp)
#ALL BLOCKS
BLOCKS = $(BLOCK) $(COMMON_BLOCK) $(ITEM_BLOCK)
IBLOCKS = $(IBLOCK) $(ICOMMON_BLOCK) $(IITEM_BLOCK)
#-------------------------------------------------------------------------------
#BALL
PATH_BALL = Ball/
IBALL = -I./$(PATH_BALL)
BALL = $(wildcard $(PATH_BALL)*.cpp)
#-------------------------------------------------------------------------------
#ITEM
PATH_ITEM = Item/
IITEM = -I./$(PATH_ITEM)
ITEM = $(wildcard $(PATH_ITEM)*.cpp)
#-------------------------------------------------------------------------------
#BANDEJA
PATH_BANDEJA = Bandeja/
IBANDEJA = -I./$(PATH_BANDEJA)
BANDEJA = $(wildcard $(PATH_BANDEJA)*.cpp)
#-------------------------------------------------------------------------------
#LEVELS AND MAPS
PATH_LEVEL = Levels/
ILEVEL = -I./$(PATH_LEVEL)
LEVEL = $(wildcard $(PATH_LEVEL)*.cpp)
PATH_MAPS = $(PATH_LEVEL)Maps/
IMAPS = -I./$(PATH_MAPS)
MAPS = $(wildcard $(PATH_MAPS)*.cpp)
#-------------------------------------------------------------------------------
#All sources
SRCS = $(wildcard *.cpp) $(BLOCKS) $(BALL) $(ITEM) $(BANDEJA) $(LEVEL) $(MAPS)
INCLUDE = -I. $(IBLOCKS) $(IBALL) $(IITEM) $(IBANDEJA) $(ILEVEL) $(IMAPS)
#-------------------------------------------------------------------------------
#RULES
all: $(PROG)
$(PROG): $(SRCS)
$(PRE) && $(CC) $(INCLUDE) $(CFLAGS) -o $(PROG) $(SRCS)
clean:
rm -f $(PROG)
I don't see anything there, so it's time to debug it. If you don't already know what to do, and it sounds like you don't, try this:
1) get it back to error condition. Make the class 48 bytes and get it back into the state where the error happens.
2) Set breakpoints in your IDE and try to find a place in the code where the program does something you did not expect.
3) Narrow down the problem. Keep an open mind, but it's very likely you have either totally messed up a pointer operation or you have somehow violated the alignment rules of your compiler and OS. I do not know the exact rules, but in a general sense Unix does something like padding (extra bytes that are zeros) to align all the data to the machine's, compiler's, or operating system's "word size". Word size used to mean the size of a register on the CPU; for example, older Intel computers had 16-bit registers, later those were widened to 32, and now they are 64 bits long. However, while 64 may be the CPU's "true" word size, the compiler or OS may think the size is 32. The bottom line is that even on a 64-bit system data may be aligned to 32 bits, or 16 bits, or 48 bits, or some other awful thing. And this is controllable by compiler flags too, so you can change the alignment! Anyway, the most common problem with alignment is using a pointer poorly, so you are right back to looking for pointer problems.
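As a concrete illustration of padding (nothing to do with your classes specifically, just to show the compiler inserting bytes to satisfy alignment):

```cpp
#include <cstddef>

struct Tight  { char c; };            // sizeof is typically 1
struct Padded { char c; double d; };  // the compiler usually inserts padding
                                      // after 'c' so that 'd' is aligned

// On a typical 64-bit system sizeof(Padded) is 16, not 9, and the struct's
// alignment requirement is at least that of its strictest member (double).
```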
This has nothing to do with the class's size. Most likely, your program has some form of undefined behavior due to pointer allocation/deallocation bugs, which means that its behavior is erratic and unpredictable. Don't bother modifying the class's size; look for a pointer-related bug in your code.
Danny Kalev
Decorator for weak and lazy attributes
Provides a decorator class for weak and lazy references.
List of objects:
Running this module from the command line will execute the doctests. To enable verbose mode, run:
python weak_and_lazy.py -v
Decorator class for weak and lazy references.
Decorator class for a property that looks like a reference to outsiders but is in fact a dynamic object-loader. After loading it stores a weak reference to the object so it can be remembered until it gets destroyed.
This means that the referenced object
You probably do not need it, if you are asking this.
Still, for what in the world might that be useful?
Suppose you program a video game with several levels. As the levels have very intense memory requirements, you want to load only those into memory which you actually need at the moment. If a level is not needed anymore (every player left the level), the garbage collector should be able to tear it into the abyss. And while fulfilling these requirements the interface should still feel like you are using normal references. That is for what you can use weak-and-lazy.
Define a Level class with VERY intense memory requirements:
class Level(object):
    def __init__(self, id, prev=None, next=None):
        print("Loaded level: %s" % id)
        self.id = id
        self.prev_level = ref(prev, self.id - 1)
        self.next_level = ref(next, self.id + 1)

    @weak_and_lazy
    def next_level(self, id):
        '''The next level!'''
        return Level(id, prev=self)

    # alternative syntax:
    prev_level = weak_and_lazy(lambda self, id: Level(id, next=self))
Besides the tremendous memory requirements of any individual level it is impossible to load ‘all’ levels, since these are fundamentally infinite in number.
So let’s load some levels:
>>> first = Level(1) Loaded level: 1 >>> second = first.next_level Loaded level: 2 >>> third = second.next_level Loaded level: 3 >>> assert third.prev_level is second
Hey, it works! Notice that the second level is loaded only once? Can it be garbage collected even if the first and third stay alive?
second_weak = weakref.ref(second)
assert second_weak() is not None
second = None
import gc
c = gc.collect()
assert second_weak() is None
Reload it into memory. As you can see, as long as second is in use it will not be loaded again.
>>> c=gc.collect() >>> second = first.next_level Loaded level: 2 >>> assert first.next_level is second
What about that sexy docstring of yours?
assert Level.next_level.__doc__ == '''The next level!'''
Let’s go even further!
>>> third.prev_level is second Loaded level: 2 False >>> second.next_level is third Loaded level: 3 False
Oups! One step too far… Be careful, this is something that your loader must to take care of. You can customly assign the references in order to connect your object graph:
third.prev_level = weakref.ref(second)
second.next_level = weakref.ref(third)
assert third.prev_level is second and second.next_level is third
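To make the mechanics concrete, here is a rough, simplified sketch of how such a weak-and-lazy descriptor can be built. This is illustrative only, not this package's actual implementation; the real class also handles loader arguments and pickling:

```python
import weakref

class weak_and_lazy_sketch(object):
    """Load on first access; afterwards keep only a weak reference."""

    def __init__(self, loader):
        self._loader = loader
        self._refs = {}  # instance id -> weakref to the loaded object

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        wr = self._refs.get(id(obj))
        target = wr() if wr is not None else None
        if target is None:
            # never loaded, or already garbage collected: (re)load it
            target = self._loader(obj)
            self._refs[id(obj)] = weakref.ref(target)
        return target
```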
Data class for a @weak_and_lazy reference instance.
This class is used to bind a reference and loader parameters to your lazy reference. It is implemented as a class and not as a tuple or dictionary for mainly one reason:
I know these were at least four reasons, but only the second one was really important. (JK)
The following methods are overloaded:
Due to the use of __slots__ instances of this class are very restricted. See also:
Defined instance attributes:
The class is defined in global scope because this seems to be a requirement for picklable classes.
NOTE: some builtins are not weak-refable and can therefore not be used with this class:
>>> ref(dict())  # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
TypeError: cannot create weak reference to 'dict' object
For such cases you could use a trivial subclass:
class Dict(dict):
    pass
NOTE: ref expects a hard reference in its constructor but stores only a weak reference:
d = Dict(Foo="Bar")
r = ref(d)
assert r.ref() is not None
del d
import gc
c = gc.collect()
assert r.ref() is None
Objects of this class are picklable:
import pickle
r = ref(Dict(Foo="Bar"), 1, 2, 3, Bar="Foo")
p = pickle.loads(pickle.dumps(r))
assert p.args == r.args and p.kwargs == r.kwargs
https://pypi.org/project/weak-and-lazy/
This.
1) The Add() method silently does nothing if the list is in ReadOnly mode. If you think about it this is really conceptually equivalent to writing the following:
public void Add(object o) {
try {
if (_readOnly) {
throw new ReadOnlyException("Can't Add to ReadOnly List.");
}
...
} catch (ReadOnlyException) {
// ignored
}
}
It would be better to create a ReadOnlyException class and throw from Add().
2) The atCapacity() method always returns true. It should be comparing _size to _elements.Length.
I also don't see any value in this trivial method. Is the following really any less readable?
if (_size >= _elements.Length) {
growBy(1);
It also might be a little easier to read IMHO if growBy() contained the check.
3) This code shouldn't use a for loop to copy array elements. Array.Copy() already exists.
4) As with atCapacity(), the addElement() method is really too trivial to warrant its own method.
5) Growing by 10 seems rather strange. The generally accepted practice is to grow by 50%.
I might do it like this:
public class MyExpandableList {
    private int _size = 0;
    private object[] _elements = new object[INITIAL_SIZE];
    private bool _readOnly;

    public void Add(object o) {
        if (_readOnly) {
            throw new ReadOnlyException("Can't add to readonly list.");
        }
        growBy(1);
        _elements[_size - 1] = o;
    }

    private void growBy(int count) {
        _size += count;
        if (_size < _elements.Length) {
            return;
        }
        object[] tmp = new object[_size + (_size / 2)];
        Array.Copy(_elements, 0, tmp, 0, _elements.Length);
        _elements = tmp;
    }

    public bool ReadOnly {
        get {
            return _readOnly;
        }
        set {
            _readOnly = value;
        }
    }
}
You're right - refactoring does help you spot bugs! atCapacity() always returns true.
I spotted the at capacity bug as well. What I've noticed as I've gotten into refactoring and this composed method pattern is that I'll pick apart some code into separate methods. The new methods can usually be optimized now that they are separate.
If the new smaller (many times one line) method is not being used in multiple places, I can put it back in the original method from which it was extracted. When I'm finished, a 30 line method has been reduced to a 10 line method with much more readable structure.
I'm a big fan of small classes and small methods, but I find a great deal of people have a hard time knowing where to start when they're trying to figure a component out and it doesn't have one humongous master class that does everything important. Has anyone come across a good way to provide a centralized starting point for a major component without resorting to code bloat? I guess I'm looking for something more substantial than a common interface pattern, the people I work with at least want to dig down into the source and see what the code is REALLY doing before they call methods on it (i.e abstraction isn't really trusted).
For instance I'm writing a pretty complex component that has 150+ classes (50,000+ LOC). (I've seen linux kernels that try to do less than this component). Even those classes are a bit too big for me. But I'm wary of refactoring farther when I know it's already too hard for someone else to figure out what all those packages are for.
Anyway, any concrete suggestions and examples would be useful. Are there any open source projects that you know of that seem pretty easy to grasp even if they are matching the complexity of an OS? I'd like to have a structure (outside of a design doc or comments) that allows a maintenance programmer the ability to grasp what each package does at a glimpse. Something self-referential so one could work their way backwards and forwards through the code without a rosetta stone.
Refactoring Example
SteveJ,
I do it with tests. By convention, I like to keep tests that demonstrate the primary responsibility(s) of a class at the top of the test class. Tests that exercise helper methods show up later in the class.
I don't really grok the idea of a component having an entry point though. That's not really part of my design work. If a component is a container for classes that are focused on the same architectural concern, how would one class in the component be more primary than another?
IMO, the best way to demonstrate what a class is for is to just demonstrate it with client code. The best way to do that in my experience is to use a client-driven design practice like test-driven development, behavior-driven design, etc.
Jeremy D. Miller has some really good articles on unit testing, design and TDD. Here are some gems: ...
Good article. As stated by a previous reply, though, I don't like the readonly guard statement silently returning.
http://codebetter.com/blogs/jeremy.miller/archive/2006/12/03/Composed-Method-Pattern.aspx
|
In the previous article we showed how to install the FANN artificial neural network library on Ubuntu. In this article we will use the library.
There are typically two parts in using artificial neural networks:
- A training part, where the neural network is trained with a training dataset. This dataset is chosen in such a way that it is representative of the real cases that it will see in the running part.
- An execution part, where the neural network is executed on a real dataset. If the neural network was trained correctly, it will now be used to give answers to input it has seen in the training dataset, but also to input it has never seen.
The example we talk about is the well-known XOR operation. Let's say that -1 represents a false value and 1 represents a true value; then the XOR operation gives the following output based on the given input:

-1 XOR -1 = -1
-1 XOR 1 = 1
1 XOR -1 = 1
1 XOR 1 = -1
We will start by creating a training dataset. FANN expects a dataset to be in a specific format. The first line of this file contains three numbers. The first number tells FANN how many samples it can expect. The second number tells it how many input values there are for one sample. The third number tells FANN how many outputs there are for one sample. The rest of the file contains the samples, where the inputs are placed on one line and the corresponding output is placed on the next line. Our training dataset then looks like this:
4 2 1
-1 -1
-1
-1 1
1
1 -1
1
1 1
-1
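As a side note, the format is simple enough to generate programmatically. A small sketch that writes the same xor.data file (the file name and sample values come from the article):

```python
# Each sample is (inputs, outputs); values taken from the XOR table above.
samples = [
    ((-1, -1), (-1,)),
    ((-1,  1), ( 1,)),
    (( 1, -1), ( 1,)),
    (( 1,  1), (-1,)),
]

with open("xor.data", "w") as f:
    # Header: number of samples, inputs per sample, outputs per sample
    f.write("%d %d %d\n" % (len(samples), len(samples[0][0]), len(samples[0][1])))
    for inputs, outputs in samples:
        f.write(" ".join(str(v) for v in inputs) + "\n")
        f.write(" ".join(str(v) for v in outputs) + "\n")
```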
We save this file as xor.data. Now we can write the training part:
#!/usr/bin/python
import libfann

connection_rate = 1
learning_rate = 0.7
num_input = 2
num_hidden = 4
num_output = 1

desired_error = 0.0001
max_iterations = 100000
iterations_between_reports = 1000

ann = libfann.neural_net()
ann.create_sparse_array(connection_rate, (num_input, num_hidden, num_output))
ann.set_learning_rate(learning_rate)
ann.set_activation_function_output(libfann.SIGMOID_SYMMETRIC_STEPWISE)

ann.train_on_file("xor.data", max_iterations, iterations_between_reports, desired_error)

ann.save("xor.net")
The neural network is saved in the file xor.net. We use this file in the execution part:
#!/usr/bin/python
import libfann

ann = libfann.neural_net()
ann.create_from_file("xor.net")

print ann.run([1, -1])
The output of this run is 1, which is the correct answer.
Hendrickson, September 14, 2010 at 3:23pm
Thank you.
garibaldi pineda garcía, August 29, 2011 at 3:49am
you should probably know that notation has changed, instead of importing the library like:
import libfann
you do
from pyfann import libfann
https://jansipke.nl/using-fann-with-python/
|
From: Doug Gregor (dgregor_at_[hidden])
Date: 2005-10-12 13:43:36
Hi Andreas,
On Oct 12, 2005, at 10:57 AM, Andreas Pokorny wrote:
> We just stumbled across a small problem with Boost.Signals,
> while playing with the symbol visibility features of gcc-3.4 and
> gcc-4. That feature is still buggy, but it unveiled some strange
> code.
We've stumbled into this as well.
> The code is in signal_template.hpp:
>
> template<typename Pair>
> R operator()(const Pair& slot) const {
> F* target = const_cast<F*>(any_cast<F>(&slot.second));
> return (*target)(BOOST_SIGNALS_BOUND_ARGS);
> }
>
> I don't see why any_cast is used here at all. The type safety
> is guaranteed by the interface of Boost.Signals, so I doubt that
> someone will be able to abuse the library. I believe the author
> of that code had the same in mind, since he did not test
> target != 0 before invoking the target. So in my opinion a
> simple reinterpret_cast should suffice here. It would also
> fix our issue, although that one is caused by gcc.
You're right. Our options are:
1) Fix any_cast to compare typeid().name() instead of type_info
objects
2) Make an unsafe_any_cast that doesn't do the check
3) Switch signals to use void*'s
For 1.33.1, I like #2 best because it affects the least amount of code.
For 1.34.0 and beyond, #1 is probably our best bet so that others don't
run into the same problem.
Doug
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2005/10/95333.php
|
Consider the following code:
Code:

#include <iostream>

int main()
{
    using namespace std;
    cout << 'a' << endl << 'b' << endl;
    cout.operator <<('a');
    cout << endl;
    cout.operator <<('b');
    cout << endl;
    cout.operator <<(*"ab");
    cout << endl;
    cout.operator <<("ac");
    cout << endl;
    operator <<(cout, "ac");
    cout << endl;
    return 0;
}

The output looks like this:
a
b
97
98
97
00B57800
ac
where the '00B57800' is just some memory location. That's also the line that surprised me. I was expecting to see 'ac' there because I had thought that 'cout << "ac";' was just shorthand for 'cout.operator <<("ac");'
Since that's obviously not the case, I looked around in Stroustrup 2009 and found on p. 610:
basic_ostream& operator<<(const void* p)
So, that explains the output of a memory location, and one of the template classes further down on the same page explains why operator <<(cout, "ac"); does give the desired 'ac' output.
What I'm wondering, though, is how the compiler knows to interpret 'cout << "ac";' as 'operator <<(cout, "ac");' rather than as 'cout.operator <<("ac");'
https://cboard.cprogramming.com/cplusplus-programming/127167-insertion-operator.html
|
Right now, this program asks the user to enter the input and output file names. Then it gives an error if it can't open the file. If it can open infile, it reads the first value and considers it the array size. After that it continues with the other numbers and stores them. Then it sorts them using insertion sort and computes the mean, median, and standard deviation. Finally it prints the sorted numbers along with the calculated values in the output file.
What I want to do is to remove that insertion sort and use my own functions. I like to call it insert in order, and I want to use three functions for that: (insert_in_order), (find_index), and (insert_at). So, insert in order works like this:
Insert in order means that you will add a function called find_index that will locate the place in the array that a new value should be inserted. It will then call insert_at, which will move the appropriate, existing elements of the array so that the new element may be inserted in its proper index. So insert_at will be called by find_index, which will be called by insert_in_order. Insert_in_order reads in each new value and calls find_index with it, setting off a chain as described above that ends with each element inserted at the right index so that the array is assembled in order.
Here is an example of what I mean, (the syntax might not be 100% correct though).
While (infile)
    insertinorder(&infile);

void insertinorder (&infile)
{
    double nextval;
    infile >> nextval;
    findindex(nextval);
}

void findindex (nextval)
{
    int i = 0;
    if (lenth == 0)
    {
        list[0] = maxval;
        lenth++;
    }
    else
    {
        while (nextval < list(i))
            i++;
        insertat(nextval, i);
    }
}

void insertat(nextval, i)
{
    int j = 0;
    if (lenth < max)
    {
        for (j = lenth; j > i; j--)
            list[j] = list(j-1);
        list[i] = nextval;
        lenth++;
    }
}
Keep in mind that this is just an idea, but if done correctly I am sure it works. Why not use other methods? Because I am still learning C++ and don't know anything about pointers or templates yet. On the other hand, you'd have to mention the array size if you don't want to use pointers and that kind of stuff. I don't want to use classes either, yet. So, you can simply read while (infile). Once you get to the end of file it will stop reading. Actually, that would be doing insert in order while (infile). Insert in order then calls find index, which calls insert at. Insert at increments the length of the array every time a value is inserted. There is no need for telling the program how large the array is, then, as you can see.
Although not absolutely required, I would like the program to give an error and stop if it reaches the 200th array element, so the infile should not contain more than 200 numbers. Again, that is not as important as using three functions to sort the numbers. Now, here is my problem: I am sure this must work, but no matter what I do I cannot make this work with my program (there are too many errors to list). There is no doubt that I am making some mistakes. Any ideas how I can make this thing work?
Here is my source code that works 100%. Right now it is still using the insertion sort algorithm and expects the first number in the file to be the array size. So, 88 66 10 should be 3 88 66 10, which is what I want changed.
#include <cmath>
#include <iostream>
#include <fstream>
#include <iomanip>
#include <string>
using namespace std;

const int Max_Size = 200;

void print(ofstream& outdata, double list[], int lenth);
void insertionSort(ofstream& outdata, double list[], int lenth);
void median(ofstream& outdata, double list[], int lenth);
void mean(ofstream& outdata, double list[], int lenth);
void standard_deviation(ofstream& outdata, double list[], int lenth);
void readdata(ifstream& indata, double list[], int& lenth, bool& lenthok);

int main()
{
    double dataarray[Max_Size];
    int datalenth;
    bool lenthisok;
    ifstream indata;
    ofstream outdata;
    string inputfile;
    string outputfile;

    cout << "Please enter the input file name:" << endl;
    cin >> inputfile;
    indata.open(inputfile.c_str());
    if (!indata)
    {
        cout << "\n\nCannot open the input file: " << inputfile
             << "\n\n** Please make sure the input file path and name are correct and try again." << endl;
        return 1;
    }
    cout << "Please enter the output file name: " << endl;
    cin >> outputfile;
    outdata.open(outputfile);
    {
        readdata(indata, dataarray, datalenth, lenthisok);
        if (lenthisok)
            insertionSort(outdata, dataarray, datalenth);
        else
            cout << "Lenth of the secret code must be <=" << Max_Size << endl;
        print(outdata, dataarray, datalenth);
        median(outdata, dataarray, datalenth);
        mean(outdata, dataarray, datalenth);
        standard_deviation(outdata, dataarray, datalenth);
        cout << "\n\nThe input data has been read from: " << inputfile
             << "\nand the output data was written to: " << outputfile << endl;
    }
    indata.close();
    outdata.close();
    return 0;
}

void readdata(ifstream& indata, double list[], int& lenth, bool& lenthok)
{
    int i;
    lenthok = true;
    indata >> lenth;
    if (lenth > Max_Size)
    {
        lenthok = false;
        return;
    }
    for (i = 0; i < lenth; i++)
        indata >> list[i];
}

void insertionSort(ofstream& outdata, double list[], int lenth)
{
    int firstoutoforder;
    int location;
    double temp;
    for (firstoutoforder = 1; firstoutoforder < lenth; firstoutoforder++)
        if (list[firstoutoforder] < list[firstoutoforder-1])
        {
            temp = list[firstoutoforder];
            location = firstoutoforder;
            do
            {
                list[location] = list[location-1];
                location--;
            } while (location > 0 && list[location-1] > temp);
            list[location] = temp;
        }
}

void print(ofstream& outdata, double list[], int lenth)
{
    int i;
    outdata << "Using insertion sort algorithm, the elements are:" << endl;
    for (i = 0; i < lenth; i++)
        outdata << list[i] << " ";
}

void median(ofstream& outdata, double list[], int lenth)
{
    int middle;
    double median;
    middle = lenth / 2;
    if (lenth % 2 == 0)
        median = (list[middle-1] + list[middle]) / 2;
    else
        median = list[middle];
    outdata << "\nThe median for the elements entered is:" << median << endl;
}

void mean(ofstream& outdata, double list[], int lenth)
{
    double sum = 0;
    double average = 0;
    int i;
    for (i = 0; i < lenth; i++)
        sum = sum + list[i];
    average = sum / lenth;
    outdata << "\nThe Mean is:" << average << endl;
}

void standard_deviation(ofstream& outdata, double list[], int lenth)
{
    double sum = 0;
    double average = 0;
    double sq_diff_sum = 0;
    double variance = 0;
    double diff = 0;
    int i;
    double deviation = 0;
    for (i = 0; i < lenth; i++)
        sum = sum + list[i];
    average = sum / lenth;
    for (i = 0; i < lenth; i++)
    {
        diff = list[i] - average;
        sq_diff_sum += diff * diff;
    }
    variance = sq_diff_sum / lenth;
    deviation = sqrt(variance);
    outdata << "\nThe Standard Deviation is:" << deviation << endl;
}
// End of code.
https://www.daniweb.com/programming/software-development/threads/457663/read-from-a-txt-file-calculate-and-output-to-a-txt-file
|
Want to buy fresh solar power led lighting indoor at discounted rates? It is the right place to live your dreams with an extensive array of products for every occasion.More than 1Get a Quote.
import quality everSolareed 15w all-in-one solar street light/ supplied by experienced manufacturers at global sources. ip 65 led all-in-one 6m led double arm solar street light (xd ssl35)s with high brightness and easy installation us$ 250 - 256 / set; 100 sets ce,rohs inquire now save compare chat.Get a Quote
Power(W): 180W: Certification: CCC,CQC,RoHS,CE: Working Temperature(℃):-55℃-+45℃ PF: >0.95: Input voltage(V): 100-240: CRI (Ra>): >70: Warranty: two years: IP Rating: IP67
solar panel lithium battery 50w 100w 150w commercial solar powered street lights. 40w upgraded 36 led solar street lights. 150w pir commercial solar lighting for parking lots. 50000h outdoor 80ra ip65 aluminum 150w roadway led light. hot.Get a Quote
Cooper Lighting LED Post Top, Vintage Garden Light, LED Post Top Area Light manufacturer / supplier in China, offering IP65 Outdoor Street Lighting 70 Watt LED Post Top Lamps for Parks, 2020 Aluminium Die Casting LED Shoebox Street Light with Meanwell Sosen Driver, High Bright 100W 150W 200W 300W LED Module Street Light Factory Direct Price and so on.
bulk wholesale led or induction solar street lights usa, conditionsGet a Quote
Buy Electrical Supplies Online at Platt Electric Supply. Wholesale electrical, industrial, lighting, tools, control and automation products. We are a value added wholesale distribution company that supplies products and services to the electrical, construction, commercial, industrial, utility and datacomm markets.Get a Quote Detection motion within 120 degrees Sensing …
new outdoor garden solar lamp 20w 30w 60w 90w integraged all in one integrated led solar street light product parameters shipping we ship to worldwide best all in one design led solar alarm light wireless street yard 20w 30w 40w 50w 60w 80w 100w 120w solar led street light. 30w 60w 80w 100w 120w 150w solar led street lighting with best price.Get a Quote
Wellesley 11 in. Double Bulb Antique Black Ceiling Flush Mount with 2 Edison 6.5-Watt LED Light Bulbs Included
ip65 waterproof led strip light 5v pir motion sensor smart on/off night lamp 3b. au $10.42. 60/90w 120/180led solar street led light design with sensor radar pir motion sensor wall lamp + au $3.99 shipping. seller 97.4% positive. 20w/40w/60w/90w led solar power outdoor wall street light pir motion sensor lamp. au $21.03. au $42.06 previous price auGet a Quote
Find the cheap Led Floodlight 300w, Find the best Led Floodlight 300w deals, Sourcing the right Led Floodlight 300w supplier can be time-consuming and difficult. Buying Request Hub makes it simple, with just a few steps: post a Buying Request and when it’s approved, suppliers on our site can quote.Get a Quote
Approved: CE and RoHS Certified Stake size: 20 x 2.5 x 2.5cm/7.87 x 0.98 x 0.98 in (L x W x H) Solar panel size: 14 x 8.2 x 2.7cm /5.5 x 3.2 x 1.1 in (L x W x H) Light panel size: 12 x 9.5 x 1.1cm/ 4.7 x 3.7 x 0.4 in (L x W x H) Product Weight: 289g/0.64lbs (accessories Solarluded)Get a Quote
Sunco Lighting 12 Pack Solar Path Lights, Dusk-to-Dawn, 5000K Daylight, Cross Spike Stake for Easy in Ground Install, Solar Powered LED Landscape Lighting - RoHS/CE 4.0 out of 5 stars 396 Tools & Home Improvement
solar street light luminaries . sunmatrix solar home light luminaries are dc operated pole mountable outdoor lighting luminaries' provides lights with low power consumption. these outdoor lighting luminaries are available in cfl and led models with different lamp wattages from 6w to 22w used in standalone pv street light solar powered 30w 60w.Get a Quote
eLEDing EE805W56 - Outdoor Solar Lights, 56 LED Ultra Bright Self-Contained Solar Powered, 180° Motion Sensor Security Wall Light, 80° Rotation, IP65 Waterproof Light for Patio, Yard, Garden, Porch TRUE SECURITY LIGHTING: It can provide all night long lighting to reduce the occurrence of theft to a minimum whenever you are at home or not and
item 2 15 watt solar street light ireland supplier in Calabarzon system (fl-a2-15w) solar energy smart solar light 1 - 15 watt separate solar street light longt lighting group system (fl-a2-15w) solar energy smart solar light $599.99 free shippingGet a Quote
The filter criteria you have selected did not deliver any results Please adjust or reset your filters or return to the Philips Lighting professional product catalog. If you have any questions about specific product configurations, please contact us .Get a Quote
CHEMICAL PLANT SAVES BIG ON ENERGY & MAINTENANCE . OQ Chemicals, global producer of over 70 oxo chemical solvent products used in coatings, lubricants, cosmetics and more, replaced the outdated lighting in their Bishop,TX facility with over 300 Dialight LED fixtures.Get
Alibaba.com offers 2,439 led cooper light products. A wide variety of led cooper light options are available to you, such as lighting solutions service, working time (hours), and material.
solar street solar control led solar power street light lamp factory price intelligent remote control integrated 20watt 40watt 60watt 80watt solar led street light. us $24.30-$26.90 / piece. 1 piece bosun china supplie outdoor waterproof ip65 road 30 watt solar led street lamp. us $34.85 /Get a Quote
Best-Seller:Rechargeable Emergency Light with AC/DC Mode 2x20W/T8 Fluorescent Tubes with test function button-LE269, US $ 6 - 8 / Piece, Zhejiang, China, JIMING / ZN'S OUTDOOR, Fluorescent.Source from Ningbo Jiming Electric Appliance Co., Ltd. on Alibaba.com.Get a Quote
Cooper Lighting LLC Solar 15-Watt LED Solar Power Outdoor Security Flood Light w/ Motion Sensor in White, Size 7"H X 10"W X 7"D | Wayfair MSL180W Cooper Lighting LLC Features:Installs easily with no wiring necessarySolar panel for maximum charging power included6V lead acid battery (included) with over-charge protectionDurable weather-resistant plastic constructioncUL ListedWire length: 196... more
mar 04, 2021 this 12v integrated solar street light, 12v is covered by 1-year limited warranty and this package consists of 8 solar powered led lights. this light comes with the bright and solar-powered led lights which in turn it offers great brightness to your yard. best rated outdoor solar powered spot lights 2021 - top reviews march 4, 2021Get a QuoteGet a Quote
These high capacity cooper street light are powered with bright LEDs that disperse strong sunshine lights on the roads and are durable enough to last for a long time. The cooper street light available here are not just ideal for streets but any outdoor places such as gardens, parks and son, for their immense power. Choose from a variety of models to pick yours.Get a Quote
We can provide Solar Panels ()2,3,5,10,20,30,40,50,160 Watts) Solar Products Solar Flash Lights Solar Garden Lights Solar Water Pumps Solar Road Lights Solar Mobile Chargers Solar Flexible Panels Solar Toys Solar Fan Caps DC ESL Led Bulb (18,27,36,64 LED) Led Lights Batteries (12Volt,7,12,20,24,35,40 AMPS) All other Solar and led solutionGet a Quote
The Lighting Resource: Articles & Blogs Case Studies Press Releases Explore All The Lighting Resource CLS Tools and Apps: Prod/Specs/Housing/Trim Locator, Discont. Specs Incentive, Calculators, Photometric/IES, Revit Light ARchitect App Trellix Developer Portal Explore All CLS Tools and Apps
good heat aluminum control waterproof outdoor panel 30watt 50watt 100watt 150watt solar 95.2%. 5.0 (1) contact supplier. 5-year warranty ip65 1000 lumen diy solar street light on jumia price 100w led solar street light. us $54.00-$108.80 / set. 2.0 sets high lumen bridgelux outdoor ip65 high power smd 20watt 40watt 60watt solar led street light. us $18Get a Quote
https://www.disc-zone.pl/82dd2031d1e52917bed725e6433e6cc8
|
QStringLiteral explained
QStringLiteral is a new macro introduced in Qt 5 to create QString from string literals. (String literals are strings inside "" included in the source code). In this blog post, I explain its inner working and implementation.
Summary
Let me start by giving a guideline on when to use it:
- Use QStringLiteral("foo") in most of the cases, if it will actually be converted to a QString.
- Use QLatin1String("foo") if it is used with a function that has an overload for QLatin1String (such as operator==, operator+, startsWith, replace, ...).
I have put this summary at the beginning for the ones that don't want to read the technical details that follow.
Read on to understand how QStringLiteral works
Reminder on how QString works
QString, like many classes in Qt, is an implicitly shared class. Its only member is a pointer to the 'private' data. The QStringData is allocated with malloc, and enough room is allocated after it to put the actual string data in the same memory block.
// Simplified for the purpose of this blog
struct QStringData {
    QtPrivate::RefCount ref;   // wrapper around a QAtomicInt
    int size;                  // size of the string
    uint alloc : 31;           // amount of memory reserved after this string data
    uint capacityReserved : 1; // internal detail used for reserve()
    qptrdiff offset;           // offset to the data (usually sizeof(QStringData))
    inline ushort *data() {
        return reinterpret_cast<ushort *>(reinterpret_cast<char *>(this) + offset);
    }
};

// ...

class QString {
    QStringData *d;
public:
    // ... public API ...
};
The offset is a pointer to the data relative to the QStringData. In Qt4, it used to be an actual pointer. We'll see why it has been changed.
The actual data in the string is stored in UTF-16, which uses 2 bytes per character.
Literals and Conversion
String literals are the strings that appear directly in the source code, between quotes.
Here are some examples (suppose action, string, and filename are QStrings):

o->setObjectName("MyObject");
if (action == "rename")
    string.replace("%FileName%", filename);
In the first line, we call the function QObject::setObjectName(const QString&). There is an implicit conversion from const char* to QString, via its constructor. A new QStringData is allocated with enough room to hold "MyObject", and then the string is copied and converted from UTF-8 to UTF-16.

The same happens in the last line, where the function QString::replace(const QString &, const QString &) is called. A new QStringData is allocated for "%FileName%".
Is there a way to prevent the allocation of QStringData and copy of the string?
Yes, one solution to avoid the costly creation of a temporary QString object is to have overloads of common functions that take a const char* parameter. So we have these overloads for operator==:

bool operator==(const QString &, const QString &);
bool operator==(const QString &, const char *);
bool operator==(const char *, const QString &);
The overloads do not need to create a new QString object for our literal and can operate directly on the raw char*.
Encoding and QLatin1String
In Qt5, we changed the default decoding for char* strings to UTF-8. But many algorithms are much slower with UTF-8 than with plain ASCII or Latin-1.
Hence you can use QLatin1String, which is just a thin wrapper around char* that specifies the encoding. There are overloads taking QLatin1String for functions that can operate on the raw Latin-1 data directly, without conversion.
So our first example now looks like:
o->setObjectName(QLatin1String("MyObject"));
if (action == QLatin1String("rename"))
    string.replace(QLatin1String("%FileName%"), filename);
The good news is that QString::replace and operator== have overloads for QLatin1String, so those calls are much faster now.
In the call to setObjectName, we avoided the conversion from UTF-8, but we still have an (implicit) conversion from QLatin1String to QString which has to allocate the QStringData on the heap.
Introducing QStringLiteral
Is it possible to avoid the allocation and copy of the string literal even for cases like setObjectName? Yes, that is what QStringLiteral does.
This macro will try to generate the QStringData at compile time, with all the fields initialized. It will even be located in the .rodata section, so it can be shared between processes.
We need two language features to do that:

- The possibility to generate UTF-16 at compile time: on Windows we can use wide chars (L"String"); on Unix we use the new C++11 Unicode literal u"String" (supported by GCC 4.4 and Clang).
- The ability to create static data from expressions: we want to be able to put QStringLiteral everywhere in the code. One way to do that is to put a static QStringData inside a C++11 lambda expression (supported by MSVC 2010 and GCC 4.5). (Update: support for the GCC __extension__ ({ }) trick was removed before the beta because it does not work in every context where lambdas work, such as in default function arguments.)
Implementation
We will need a POD structure that contains both the QStringData and the actual string. Its layout will depend on the method we use to generate UTF-16.

The code below was extracted from qstring.h, with added comments, and edited for readability.
/* We define QT_UNICODE_LITERAL_II and declare qunicodechar depending on the compiler */
#if defined(Q_COMPILER_UNICODE_STRINGS) // C++11 unicode strings
#define QT_UNICODE_LITERAL_II(str) u"" str
typedef char16_t qunicodechar;
#elif __SIZEOF_WCHAR_T__ == 2 // wchar_t is 2 bytes (condition a bit simplified)
#define QT_UNICODE_LITERAL_II(str) L##str
typedef wchar_t qunicodechar;
#else
typedef ushort qunicodechar; // fallback
#endif

// The structure that will contain the string.
// N is the string size.
template <int N> struct QStaticStringData {
    QStringData str;
    qunicodechar data[N + 1];
};

// Helper class wrapping a pointer that we can pass to the QString constructor
struct QStringDataPtr {
    QStringData *ptr;
};
#if defined(QT_UNICODE_LITERAL_II)
// QT_UNICODE_LITERAL needed because of macro expansion rules
# define QT_UNICODE_LITERAL(str) QT_UNICODE_LITERAL_II(str)
# if defined(Q_COMPILER_LAMBDA)
#  define QStringLiteral(str) \
    ([]() -> QString { \
        enum { Size = sizeof(QT_UNICODE_LITERAL(str))/2 - 1 }; \
        static const QStaticStringData<Size> qstring_literal = { \
            Q_STATIC_STRING_DATA_HEADER_INITIALIZER(Size), \
            QT_UNICODE_LITERAL(str) }; \
        QStringDataPtr holder = { &qstring_literal.str }; \
        const QString s(holder); \
        return s; \
    }())
# elif defined(Q_CC_GNU)
   // Use the GCC __extension__ ({ }) trick instead of a lambda
   // ... <skipped> ...
# endif
#endif

#ifndef QStringLiteral
// no lambdas, not GCC, or GCC in C++98 mode with 4-byte wchar_t
// fallback, return a temporary QString
// source code is assumed to be encoded in UTF-8
# define QStringLiteral(str) QString::fromUtf8(str, sizeof(str) - 1)
#endif
Let us simplify this macro a bit and look at how it would expand:
o->setObjectName(QStringLiteral("MyObject"));

// would expand to:

o->setObjectName(([]() {
    // We are in a lambda expression that returns a QString
    // Compute the size using sizeof, (minus the null terminator)
    enum { Size = sizeof(u"MyObject")/2 - 1 };
    // Initialize. (This is static data initialized at compile time.)
    static const QStaticStringData<Size> qstring_literal = {
        { /* ref = */ -1, /* size = */ Size, /* alloc = */ 0,
          /* capacityReserved = */ 0, /* offset = */ sizeof(QStringData) },
        u"MyObject" };
    QStringDataPtr holder = { &qstring_literal.str };
    QString s(holder); // call the QString(QStringDataPtr&) constructor
    return s;
}()) // Call the lambda
);
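As a side note on the Size computation used in this expansion, here is a small illustrative sketch (plain Python, not Qt code) of why the size of a UTF-16 literal, divided by two, minus one, yields the character count:

```python
# Illustrative only: a C++ u"..." literal stores 2-byte UTF-16 code units
# plus a 2-byte null terminator, so sizeof(u"MyObject")/2 - 1 gives the
# number of characters in the literal.
def utf16_char_count(s):
    size_of_literal = len(s.encode("utf-16-le")) + 2  # + null terminator
    return size_of_literal // 2 - 1

print(utf16_char_count("MyObject"))  # 8
```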
The reference count is initialized to -1. A negative value is never incremented or decremented because we are in read only data.
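Purely as an illustration (this is not Qt's actual implementation), the rule that a negative reference count is never touched can be sketched like this:

```python
# Illustrative sketch: a reference count that is skipped when negative,
# so read-only static data is never written to.
class SharedData:
    def __init__(self, ref):
        self.ref = ref

    def acquire(self):
        if self.ref > 0:      # negative ref = static, read-only: leave alone
            self.ref += 1

    def release(self):
        if self.ref > 0:
            self.ref -= 1
        return self.ref != 0  # deallocate only when a positive count hits 0

literal = SharedData(-1)      # like a QStringLiteral living in .rodata
literal.acquire()
literal.release()
print(literal.ref)  # still -1
```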
One can see why it is so important to have an offset (qptrdiff) rather than a pointer to the string (ushort*) as it was in Qt4. It is indeed impossible to put a pointer in the read-only section, because pointers might need to be relocated at load time. That means that each time an application or library is loaded, the OS needs to rewrite all the pointer addresses using the relocation table.
Results
For fun, we can look at the assembly generated for a very simple call to QStringLiteral. We can see that there is almost no code, and how the data is laid out in the .rodata section
We notice the overhead in the binary. The string takes twice as much memory since it is encoded in UTF-16, and there is also a header of sizeof(QStringData) = 24 bytes. This memory overhead is the reason why it still makes sense to use QLatin1String when the function you are calling has an overload for it.
QString returnAString() { return QStringLiteral("Hello"); }
Compiled with
g++ -O2 -S -std=c++0x (GCC 4.7) on x86_64
.text
.globl _Z13returnAStringv
.type _Z13returnAStringv, @function
_Z13returnAStringv:
	; load the address of the QStringData into %rdx
	leaq _ZZZ13returnAStringvENKUlvE_clEvE15qstring_literal(%rip), %rdx
	movq %rdi, %rax
	; copy the QStringData from %rdx to the QString return object
	; allocated by the caller. (the QString constructor has been inlined)
	movq %rdx, (%rdi)
	ret
	.size _Z13returnAStringv, .-_Z13returnAStringv

.section .rodata
	.align 32
	.type _ZZZ13returnAStringvENKUlvE_clEvE15qstring_literal, @object
	.size _ZZZ13returnAStringvENKUlvE_clEvE15qstring_literal, 40
_ZZZ13returnAStringvENKUlvE_clEvE15qstring_literal:
	.long -1      ; ref
	.long 5       ; size
	.long 0       ; alloc + capacityReserved
	.zero 4       ; padding
	.quad 24      ; offset
	.string "H"   ; the data. Each .string adds a terminal '\0'
	.string "e"
	.string "l"
	.string "l"
	.string "o"
	.string ""
	.string ""
	.zero 4
Conclusion
I hope that now that you have read this you will have a better understanding on where to use and not to use
QStringLiteral.
There is another macro, QByteArrayLiteral, which works on exactly the same principle but creates a QByteArray.
Update: See also the internals of QMutex and more C++11 features in Qt5.
Woboq is a software company expert in development and consulting around Qt and C++. Hire us!
If you like this blog and want to read similar articles, consider subscribing via our RSS feed, by e-mail or follow us on twitter or add us on G+.
Article posted by Olivier Goffart on 21 May 2012
Read About Woboq, use our Software Development Services, check out our Products, like the Code Browser, or learn new things about software development on our Blog.
http://woboq.com/blog/qstringliteral.html
QOpenGLWidget 5.5, OpenGL 4.1, MacOS X, Intel HD Graphics 5000 1536 MB
Hi,
I figured Game Dev would be the best place to post this request:
Is anyone out there able to use QOpenGLWidget (or QOpenGLWindow for that matter) with OpenGL 4.1 with Qt 5.5? If so, perhaps you are willing to provide an extremely simple example project? A triangle with frag and vertex shaders would be all that I would need.
I have tried so many configurations, profiles, formats, drawing methods, etc. that I am at a complete loss and my code is getting more and more scrambled as I become more frustrated.
Example or not, any tips would be much appreciated.
Thanks!
P.S. I've had no problems with OpenGL 2.0/GLSL 110. I'm primarily seeking the instancing functions introduced in later versions.
Hi
I'm working on some OpenGL tutorials; I added one with a triangle with OpenGL 4.1. Check here, it works for me.
Welcome aboard.
Hi cccl and Johngod,
First of all thanks Johngod for your great tutorial!
Then, regarding cccl's issue, it would be great if you could update the main file in your tutorial by setting the context's default format in order to compile shaders 4+. Below is how the main file should look:
#include "mainwindow.h"
#include <QApplication>
#include <QSurfaceFormat>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    QSurfaceFormat format;
    format.setDepthBufferSize(24);   // if you need it
    format.setStencilBufferSize(8);  // if you need it
    format.setVersion(4, 1);         // OpenGL version
    format.setProfile(QSurfaceFormat::CoreProfile);
    QSurfaceFormat::setDefaultFormat(format);

    MainWindow w;
    w.show();
    return a.exec();
}
Cheers, Nai
@Alharbi
Thanks for your comments. Perhaps you missed it, but I have that code in the MainWindow ctor. However, your suggestion makes sense, since according to the docs, "it is simpler and more robust to set the requested format globally so that it applies to all windows and contexts during the lifetime of the application". So I have updated it to main.cpp.
Some notes: I could not make the core profile work on Linux, not sure why, and on Windows both profiles seem buggy. I found some reported OpenGL bugs on the bugtracker that could be the reason, or maybe it's just something that I'm overlooking in my code.
https://forum.qt.io/topic/63581/qopenglwidget-5-5-opengl-4-1-macos-x-intel-hd-graphics-5000-1536-mb
Gwibber blank not displaying anything for Facebook
Bug Description
Ubuntu 13.04 Upgrading package python-imaging 1.1.7-4 to python-imaging 1.1.7+1.7.8-1 causes gwibber to display nothing. I uninstalled python-imaging 1.1.7+1.7.8-1 and re-installed python-imaging 1.1.7-4 and now it's working perfectly again. I locked the python-imaging 1.1.7-4 so it would not upgrade and everything is fine again.
Terminal output with upgraded package:
Traceback (most recent call last):
  File "/usr/bin/
    from gwibber.
  File "/usr/lib/
    from gwibber.
  File "/usr/lib/
    import Image
ImportError: No module named Image
Ubuntu 13.04 64 Bit Gwibber 3.6
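The traceback suggests the updated package exposes the module only under a different name, so the bare `import Image` fails. A hedged sketch of the kind of fallback-import pattern that resolves this class of error (demonstrated with stdlib modules so it runs without PIL installed):

```python
import importlib

def first_importable(*names):
    # Try each module name in turn, mimicking an
    # "import Image" vs "from PIL import Image" style fallback.
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError('none of {} found'.format(names))

# 'NoSuchLegacyModule' stands in for the old top-level name
mod = first_importable('NoSuchLegacyModule', 'json')
print(mod.__name__)  # json
```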
https://bugs.launchpad.net/gwibber/+bug/1113132
I have multiple point feature classes (las_1.shp, las_2.shp, ..., _711) in one folder. I am trying to write a Python script to perform IDW interpolation that loops over all the point feature classes and saves each output raster under the name of its input feature class.
Each point feature class holds only one Z field (Altit).
The problem is that after interpolation some of the generated rasters are saved with an error (all black). I think the problem is in the part that saves the raster (os.path.join), because I executed the script without this part and the interpolation works perfectly. I'm using Python 2.7.
Below is my script, what I have so far:
import arcpy
from arcpy import env
from arcpy.sa import *
import os
arcpy.env.parallelProcessingFactor = "100%"
arcpy.env.overwriteOutput = True
# settings
env.workspace = r"C:\Users\pedro\Documents\Mestrado\GEO\Pontos_Laser\TESTE"
out_folder = r"C:\Users\pedro\Documents\Mestrado\GEO\Pontos_Laser\RASTER"
zField = "Altit" # has to be string, not list
points = arcpy.ListFeatureClasses("las_*.shp", "POINT")
print points
cellSize = 5
power = 2
searchRadius = RadiusVariable(12, 5)
arcpy.CheckOutExtension("Spatial")
# Execute IDW
for fc in points:
    print fc
    outIDW = Idw(fc, zField, cellSize, power, searchRadius)
    # Save output raster to output workspace
    tifname = fc[:7]
    IDW_OUT = os.path.join(out_folder, '{}.tif'.format(tifname))
    print IDW_OUT
    outIDW.save(IDW_OUT)
print 'done'
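One likely culprit is the hard-coded `fc[:7]` slice: for a short name like `las_1.shp` it yields `las_1.s`, so part of the extension leaks into the output name. A sketch (plain Python, no arcpy needed) of deriving the output name with `os.path.splitext` instead:

```python
import os

def tif_name(fc, out_folder):
    # Strip the .shp extension instead of slicing a fixed number of chars
    base = os.path.splitext(os.path.basename(fc))[0]
    return os.path.join(out_folder, '{}.tif'.format(base))

print(tif_name('las_1.shp', 'RASTER'))  # e.g. RASTER/las_1.tif, not las_1.s.tif
```

This works for both short and long names, so the same line can replace `tifname = fc[:7]` inside the loop.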
From your print statements, this must be in ArcMap/Python 2.7?
Also, your indentation is off and line numbers would be a help
Code Formatting... the basics++
https://community.esri.com/thread/257724-idw-interpolatation-using-arcpy
Have you ever been CONFUSED by creating make files? Have you ever WONDERED which files had to be rebuild because of changes in make files or header files? Have you ever thought about how CONCURRENT your make files actually are? Have you ever been disturbed by DISTRIBUTING IMPLEMENTATION DETAILS about a particular .c- or .h-file over multiple makefiles? Have you ever thought how concurrency could really work while invoking make MULTIPLE (or many) times and how that puts a lot of OVERHEAD into a build process? Have you ever found it hard to MODULARIZE your make files? Have you ever QUESTIONED the sense of make files in general?
THEN THIS BLOG POSTING IS FOR YOU :-)
Warning: When coming down to creating, maintaining or compiling stuff, I know I am different, so be warned :*) I did briefly explain to others how I felt things should work ... and did hear from some that I am odd, while others seemed to like it ... anyway, I do believe in and successfully tried the below approach in my home projects and am currently playing with it while experimenting with packaging ... but judge yourself :-)
A Start: Let's start with simple things, in my little world I mostly have some .c- and .h-files, while I typically want to create some .o-, .bin- and .so- respectively lib*.so-files. Some files need special flags, e.g. for optimization or debugging. To understand how we can automate this - thus abandoning the makefile burden ideally completely - we may want to take a look at a typical C include.
A simple include looks like this:
foo.c: #include "a.h"
Compiling: For the corresponding .o-file (foo.o) this means, that it needs to be remade in case foo.c, a.h or any of the transitive includes of a.h changes.
Linking: To be able to link foo.o into an executable or shared library, we also need to link-in any objects or shared libraries implementing a.h (or its includes respectively).
Sample Code: Let's say, a.h only declares what's implemented in a.c, such that a.h does not include anything else. Unfortunately a.c includes (despite a.h) b.h as well as x.h .
a.c: #include "a.h" #include "b.h" #include "x.h"
Luckily, b.h does not include anything itself, but b.c does, additionally to b.h it also includes c.h, which is implemented in c.c:
b.c: #include "b.h" #include "c.h" c.c: #include "c.h"
And if this had not been enough, a.c respectively a.o does also need the implementation of x.h to be properly linkable, which is not directly implemented in a .o-file but a linkable shared library named libx.so.
Looking into x.c
x.c: #include "x.h" #include "y.h" #include "d.h"
we see, that itself needs another library (liby.so) and at least one other .o-file (d.o). And foo.bin is not the only executable we would like to link, but bar.bin as well. bar.bin is partly based on the same objects as foo.bin, but links differently, I leave the details out here, please have a look at the bottom for all sample files.
And its Makefile: Expressing this in GNU make looks like this:
============= handcraft.mk =============:
LDFLAGS+=-Wl,-rpath='$$ORIGIN/' -L.

a.o: a.c a.h b.h c.h x.h
	$(COMPILE.c) $(OUTPUT_OPTION) $<

b.o: b.c b.h c.h
	$(COMPILE.c) $(OUTPUT_OPTION) $<

c.o: c.c c.h
	$(COMPILE.c) $(OUTPUT_OPTION) $<

x.o: x.c x.h y.h d.h
	$(COMPILE.c) $(OUTPUT_OPTION) $<

y.o: y.c y.h
	$(COMPILE.c) $(OUTPUT_OPTION) $<

libx.so: x.o liby.so
	$(LINK.o) $(LOADLIBES) $(LDLIBS) -shared $^ -L. -ly -o $@

liby.so: y.o
	$(LINK.o) $(LOADLIBES) $(LDLIBS) -shared $^ -o $@

foo.o: foo.c a.h
	$(COMPILE.c) $(OUTPUT_OPTION) $<

foo.bin: foo.o a.o b.o c.o libx.so
	$(LINK.o) $(LOADLIBES) $(LDLIBS) $^ -o $@

bar.o: bar.c b.h c.h d.h
	$(COMPILE.c) $(OUTPUT_OPTION) $<

bar.bin: bar.o b.o c.o d.o
	$(LINK.o) $(LOADLIBES) $(LDLIBS) $^ -o $@
Now we can build the concrete targets by invoking make as:
> make -f handcraft.mk foo.bin > make -f handcraft.mk bar.o > make -f handcraft.mk libx.so > make -f handcraft.mk -j foo.bin bar.bin > make -f handcraft.mk ...
Automation: As the number of targets and prerequisites grows, such a make file can become complicated pretty fast - after all, building software properly is not an easy task.
Automatic dependency generation for include files is well known and has been done for long, what's missing to become fully automatic are compilation flags in particular and linking support in general. The point is, that only the moment building becomes fully automatic, make can be modularized reasonably and we can build a whole project invoking make once only (which is desirable :-).
So, let's see what information regarding linking has been implicitly expressed in the above makefile:
The Rule: In principle, the rule seems to be: If you include a header, ensure that you link-in the implementing .o-files and shared libraries, as well as the required .o-files and shared libraries of these .o-files etc.
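The transitive part of this rule is just a closure computation. A small, hypothetical Python sketch (file names taken from the example above, helper names invented) of collecting everything that must be linked in:

```python
# Hypothetical sketch: map each header to the object/library implementing it,
# and each object to the headers it includes; then take the closure.
implements = {'a.h': 'a.o', 'b.h': 'b.o', 'c.h': 'c.o', 'x.h': 'libx.so'}
includes = {'a.o': ['a.h', 'b.h', 'x.h'], 'b.o': ['b.h', 'c.h'],
            'c.o': ['c.h'], 'foo.o': ['a.h'], 'libx.so': ['x.h']}

def link_requirements(obj):
    needed, todo = set(), [obj]
    while todo:
        cur = todo.pop()
        for hdr in includes.get(cur, []):
            impl = implements.get(hdr)
            if impl and impl != obj and impl not in needed:
                needed.add(impl)
                todo.append(impl)   # follow requirements transitively
    return needed

print(sorted(link_requirements('foo.o')))  # ['a.o', 'b.o', 'c.o', 'libx.so']
```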
Cohesion: The hints regarding required .o-files, shared libraries and c-flags should be placed in the appropriate files and need to be available for make during building; otherwise this information gets repeated in multiple targets and makefiles again and again - which in my experience is typical ;*)
Concrete: If we have a header file which requires a particular object file to be linked-in, we can (initially hackily :-) express this directly in-line, prefixing it with a "magic" word, e.g. as:
//§ req_objs:=a.o
We may want to do the same with libraries:
//§ req_libs:=libx.so
And even compilation flags may be expressed this way:
//§ cflags:=-g
Applying the schema to the above files, we get the following additional lines:
foo.c: //§ cflags=-g
a.h: //§ req_objs:=a.o
b.h: //§ req_objs:=b.o
x.h: //§ req_libs=libx.so
...
Now, the only thing left is to write some generic make "modules" to utilize this now explicitly available information ... this is so obvious that I leave it as homework for the patient reader ... just joking :-) please have a look below.
Conclusion: By now we have developed a simple schema for linking binaries and shared libraries based on C files, which allows us to build any final or intermediate target fully automatically, expressing linking relevant information in a cohesive way, while eliminating the need to hassle with make-files ever again.
Scalability: I have to admit that I did not yet test how scalable this approach is, in principle it should scale quite well ... and after all we are only talking about some thousand files for typical projects. At least for my home projects with some hundred files it works quite nicely.
Reasoning: The reason I brought this up is, that I believe that anything can be build fully automatically this way - even installation packages I am currently experimenting with.
In Principle: The overall rule set to build any kind of derived file fully automatically (and not only compiling C sources :-), seems to be something along the following lines:
Note
on -7-: As a derivation may not depend on its primary prerequisite only,
in practice some functions need to be executed at runtime to determine all
secondary prerequisites. Actually, the concrete function to be calculated depends on
the actual explicit rule selected for the creation of the derivation.
As ordinary makes (e.g. GNU make) do not offer a way to add prerequisites to
a derivation after an explicit rule has been selected, statement -7-
has to be adapted for these to work:
-7- For any type of derivation there is exactly one explicit rule.
Therefore, to still enable deriving one type from different types of prerequisites, we need to mangle in the prerequisites type.
Finally: If there is interest and people think that this general approach seems to be reasonable, I am going to explain in more detail how it actually works and what else could be done to make a builders life easier ... and I may even try it out on a concrete OOo module, may be SAL :-)
Last but not least I have to admit that I am a decent GNU make fan :-)
Please find sample files etc. below.
Best regards
Samples and make files for GNU make, tried on Debian Linux, just copy&paste the stuff into the named files ...
============= foo.c =============: //§ cflags:=-g #include "a.h" int main(void) { return 0; } ============= bar.c =============: //§ cflags:=-O #include "b.h" #include "c.h" #include "d.h" int main(void) { return 0; } ============= a.h =============: //§ req_objs=a.o ============= a.c =============: #include "a.h" #include "b.h" #include "x.h" ============= b.h =============: //§ req_objs=b.o ============= b.c =============: #include "b.h" #include "c.h" ============= c.h =============: //§ req_objs:=c.o ============= c.c =============: #include "c.h" ============= d.h =============: //§ req_objs=d.o ============= d.c =============: #include "d.h" ============= x.h =============: //§ req_libs=libx.so ============= x.c =============: #include "x.h" #include "y.h" #include "d.h" ============= y.h =============: //§ req_libs:=liby.so ============= y.c =============: #include "y.h" ============= generic.mk =============: # The master makefile. # Preserve intermediate files. .PRECIOUS: %.o # To find linked libraries relatively. LDFLAGS+=-Wl,-rpath='$$ORIGIN/' -L. include functions.mk # Some helpers. include info_c.mk # Create C infos. include o_c.mk # Create C objects. include bin_o.mk # Create object binaries. include libso_o.mk # Create object shared libraries. $(foreach target,$(MAKECMDGOALS),$(call $(suffix $(target))_spreqs,$(target))) ============= functions.mk =============: # Guard includes. define ginc_t ifndef $(1)_inc_def $(1)_inc_def=:1 -include $(1) endif endef ginc=$(eval $(ginc_t)) # Calculate the referential transitive closure. 
define closure_t ifndef $(1)$(2)_cls_def $(1)$(2)_cls_def:=1 ifdef $(suffix $(1))_info $$(call $(suffix $(1))_info,$(1)) else $$(call ginc,$(1).info) endif $(1)$(2)_cls:=$$(strip $$($(1)$(2)) $$(foreach file,$$($(1)$(2)),$$(call closure,$$(file),$(2)))) endif endef # $(1) = file name # $(2) = variable postfix closure=$(eval $(closure_t)) $($(1)$(2)_cls) ============= info_c.mk =============: # Rules and prerequisits for .info files . define info_script echo Making $@ echo "`grep ^//§ $<|sed s0//§\ 0$<_0g)`" > $@ echo "$<_includes:=`echo \`grep \#include $<|sed s0\#include00g|sed s0\\\"00g\``" >> $@ endef %.info: % @$(info_script) ============= o_c.mk =============: # Rules and prerequisits for .o files, as well as .o.info . define t_o_info ifndef $(1)_info_def $(1)_info_def:=1 t_o_info_ppreq_$(1):=$(1:.o=.c) $$(call ginc,$$(t_o_info_ppreq_$(1)).info) t_o_info_all_includes:=$$(call closure,$$(t_o_info_ppreq_$(1)),_includes) $(1)_req_objs:=$$(subst $(1),,$$(foreach inc,$$(t_o_info_all_includes),$$($$(inc)_req_objs))) $(1)_req_libs:=$$(foreach inc,$$(t_o_info_all_includes),$$($$(inc)_req_libs)) endif endef .o_info= $(eval $(t_o_info)) define t_o_spreqs ifndef t_o_spreqs_$(1)_def t_o_spreqs_$(1)_def:=1 t_o_spreqs_ppreq_$(1):=$(1:.o=.c) $(1): $$(call closure,$$(t_o_spreqs_ppreq_$(1)),_includes) $(1): cflags_:=$$($$(t_o_spreqs_ppreq_$(1))_cflags) endif endef .o_spreqs=$(eval $(t_o_spreqs)) %.o: %.c $(COMPILE.c) $(OUTPUT_OPTION) $(cflags_) $< ============= bin_o.mk =============: # Rules and prerequisits for .bin files . 
define t_bin_spreqs
ifndef t_bin_spreqs_$(1)_def
t_bin_spreqs_$(1)_def:=1
t_bin_spreqs_$(1)_ppreq:=$(1:.bin=.o)
$$(call .o_spreqs,$$(t_bin_spreqs_$(1)_ppreq))
t_bin_spreqs_$(1)_objs:=$$(call closure,$$(t_bin_spreqs_$(1)_ppreq),_req_objs)
$$(foreach obj,$$(t_bin_spreqs_$(1)_objs),$$(call .o_spreqs,$$(obj)))
t_bin_spreqs_$(1)_libs:=$$(subst $(1),,$$(foreach obj,$$(t_bin_spreqs_$(1)_ppreq) $$(t_bin_spreqs_$(1)_objs),$$($$(obj)_req_libs)))
$$(foreach lib,$$(t_bin_spreqs_$(1)_libs),$$(call .so_spreqs,$$(lib)))
$(1): objs_:=$$(t_bin_spreqs_$(1)_objs)
$(1): libs_:=$$(t_bin_spreqs_$(1)_libs)
$(1): $$(t_bin_spreqs_$(1)_objs)
$(1): $$(t_bin_spreqs_$(1)_libs)
endif
endef
.bin_spreqs=$(eval $(t_bin_spreqs))

%.bin: %.o
	$(LINK.o) $(LOADLIBES) $(LDLIBS) $< $(objs_) $(patsubst lib%.so,-l%,$(libs_)) -o $@

============= libso_o.mk =============:
# Rules and prerequisites for lib*.so files .
define t_so_spreqs
ifndef t_bin_spreqs_$(1)_def
t_so_spreqs_$(1)_def:=1
t_so_spreqs_$(1)_ppreq:=$(patsubst lib%.so,%.o,$(1))
$$(call .o_spreqs,$$(t_so_spreqs_$(1)_ppreq))
t_so_spreqs_$(1)_objs:=$$(call closure,$$(t_so_spreqs_$(1)_ppreq),_req_objs)
$$(foreach obj,$$(t_so_spreqs_$(1)_objs),$$(call .o_spreqs,$$(obj)))
t_so_spreqs_$(1)_libs:=$$(subst $(1),,$$(foreach obj,$$(t_so_spreqs_$(1)_ppreq) $$(t_so_spreqs_$(1)_objs),$$($$(obj)_req_libs)))
$$(foreach lib,$$(t_so_spreqs_$(1)_libs),$$(call .so_spreqs,$$(lib)))
$(1): objs_:=$$(t_so_spreqs_$(1)_objs)
$(1): libs_:=$$(t_so_spreqs_$(1)_libs)
$(1): $$(t_so_spreqs_$(1)_objs)
$(1): $$(t_so_spreqs_$(1)_libs)
endif
endef
.so_spreqs=$(eval $(t_so_spreqs))

lib%.so: %.o
	$(LINK.o) $(LOADLIBES) $(LDLIBS) -shared $< $(objs_) $(patsubst lib%.so,-l%,$(libs_)) -o $@

Now the generic make file may be invoked in the same way as the handcrafted one:
> make -f generic.mk foo.bin > make -f generic.mk bar.o > make -f generic.mk libx.so > make -f generic.mk -j foo.bin bar.bin > make -f generic.mk ...
tags: automatic build building gnumake make openoffice openoffice.org packaging
http://blogs.sun.com/GullFOSS/entry/and_what_about_make
checkButtonScaled editbll 2016-05-10: The standard ttk::checkbutton does not scale well for 4K displays. This code is a pure Tcl implementation of checkbuttons with no graphics. It includes styling for all 22 themes. I have attempted to match each theme's styling reasonably well. The package will change themes on a <<ThemeChanged>> event. Place the package require in your code after tk scaling has been set. Download from:
Version 1.19 2018-1-6
- Fixed scaling/display problems with the checkmarks.
Version 1.18 2017-9-14
Version 1.17 2017-8-19
Version 1.16 2016-8-16
Version 1.14 2016-7-4
Version 1.13 2016-7-2
Version 1.12 2016-6-14
Version 1.11 2016-6-9
Version 1.10 2016-6-3
Version 1.9 2016-5-16
Version 1.8 2016-5-15
Version 1.7 2016-5-11
Version 1.6 2016-5-10
- Fixed issues with 'default' (et.al.) themes.
- Changed to scale with the size of the configured font. Since the font should be scaled up properly by tk scaling, this should work out ok.
- Removed propagateBinds routine. Binds placed on the main container are now executed properly.
- Fixed focus state reporting.
- Change spacer to be a frame
- Better handle Mac OS X's crazy amount of extra label padding (fixed in 8.6.6).
- Fixed focus issues.
- Fixed highlight issues.
- Fixed space bar toggle for tab-in.
- keep disabled background color the same as the background color.
- fixed trace on configure of same variable name.
- improved efficiency
- added changeColor proc - supports other background colors.
- force theme change if it hasn't been done yet.
- Clean up leftover default variables from the global namespace.
- Fixed a problem with leftover traces.
- Fixed destroy cleanup of variables.
- Fixed scaling.
- Improved size of indicator character.
- Added propagateBinds routine
- Handle -style option -- loads styling from the ttk::style
- Was missing -font
- Fixed a bug for configure -variable
http://wiki.tcl.tk/44212
How do you do a mathematical operation on a matrix, i.e. on the elements of the matrix?
So this is the recipe on how we can subtract something from each element of a matrix.
We have imported numpy, which is needed.
import numpy as np
We have created a matrix on which we will perform the operation.
matrixA = np.array([[2, 3, 23],
[5, 6, 25],
[8, 9, 28]])
We have made a function that will subtract a number, in this case 15, from every element of the matrix. Finally, we have printed the resulting matrix.
subtract_15 = lambda i: i - 15
vectorized_subtract_15 = np.vectorize(subtract_15)
print(vectorized_subtract_15(matrixA))
So the output comes as
[[-13 -12   8]
 [-10  -9  10]
 [ -7  -6  13]]
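Note that for a simple elementwise subtraction, np.vectorize is not actually needed: NumPy broadcasting does the same thing more idiomatically and faster.

```python
import numpy as np

matrixA = np.array([[2, 3, 23],
                    [5, 6, 25],
                    [8, 9, 28]])

# Broadcasting subtracts the scalar from every element directly,
# producing the same result as the vectorized lambda above.
print(matrixA - 15)
```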
https://www.projectpro.io/recipes/subtract-numerical-value-each-element-of-matrix
MongoDB has consistently been ranked among the top five most popular NoSQL Database Management Systems (DMS). If you’re applying for a position involving database management, then you stand a good chance of working with MongoDB at some point.
Whether you’re upskilling or interviewing for a Database Management position working with MongoDB, it’s prudent to be familiar with the more common MongoDB questions and answers. In the latter case, it pays to prepare for that interview and not get caught unawares. But even if you’re just interested in learning about MongoDB for the purposes of upskilling, bear in mind that acquiring this kind of knowledge could very well lead you to look for a better career, and that means you’ll end up facing an interview sooner or later. Might as well be ready!
Whatever the case, you should check these top MongoDB interview questions and answers, and be ready for anything. Even if you’re an experienced MongoDB user, it pays to polish up your knowledge base.
Master the skills of data modeling, ingestion, query and sharding with the MongoDB Developer and Administration Certification training course.
MongoDB is a cross-platform document-oriented database program that offers high performance, high availability and easy scalability. It is considered the leading NoSQL database.
A NoSQL database offers a mechanism for storage and retrieval of data meant to respond to the unique demands and challenges of today’s modern applications. NoSQL, which stands for “Not Only SQL” is a departure from the tabular relations used in relational databases such as SQL and Oracle.
Types of NoSQL databases include document-oriented databases, key-value stores, column-oriented databases, and graph databases.
MongoDB is the first type, a document-oriented database, storing data in Binary JSON (BSON) structure-based documents, which in turn are stored in a collection.
MongoDB is a Relational Database Management System (RDBMS) replacement for web applications, giving users the additional partition tolerance that an RDBMS doesn’t provide. MongoDB is suitable for real-time analytics and high-speed logging and is highly scalable. It’s perfect for handling unstructured, complicated, and messy data.
MongoDB is used by an impressively wide variety of organizations today, including Google, Cisco, Verizon, Nokia, and Facebook, to name a few.
MongoDB provides a highly flexible and scalable document structure. For instance, one data document in MongoDB can have six columns while other documents in the same collection can have a dozen columns. Furthermore, thanks to efficient indexing and storage techniques, MongoDB databases are faster in comparison with other SQL databases.
The 32-bit edition has a 2GB data limit (it’s actually 4GB, but 2GB is taken up by the OS). If this limit is somehow exceeded, it will corrupt the entire database and the existing data. This bug doesn’t exist in the 64-bit edition, which is the version recommended if you’re doing any serious application work.
MongoDB allows very fast writes and updates by default, but as a result, you are not explicitly notified of any failures. By default, most drivers do asynchronous, ‘unsafe’ writes, meaning that the driver does not return an error directly, like as in the case of INSERT DELAYED with MySQL. If you want to know if something succeeded, you have to personally check for errors using the get last error function. If there is a server crash or power failure, all changes buffered in the memory will be lost. Although this functionality can be disabled, it will reduce performance to the equivalent of MySQL, and possibly worse.
MongoDB is ideal only for implementing things like analytics/caching, where the impact of small data loss is negligible.
MongoDB makes it difficult to represent relationships between data, as JOINS are not possible with it. As a result, you end up performing the task manually by creating another table to represent the relationship between rows in two or more tables.
The most important MongoDB features are:
MongoDB comes with a database profiler that shows each operation’s performance characteristics against the database. Using the profiler, you can analyze all of the queries which are being run by the DB system in question. This information can then be used for determining when an index is called for.
The languages MongoDB uses are, in alphabetical order:
A namespace is the combination (often referred to as the concatenation) of the database name and the collection or index name. For instance, “students.girls” would indicate the database was called “students”, and the collection is “girls”.
Sharding is the procedure of storing data on multiple machines. It’s how MongoDB deals with the issue of increased data growth demands. This data storage method involves the horizontal partitioning of data in a search engine or database; each partition is known as a shard or a database shard.
This is an acronym for Create, Read, Update, and Delete: the basic MongoDB operations.
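As an illustration of what these four operations mean, here is a minimal in-memory toy in plain Python (not a real driver; in pymongo, for example, they correspond to insert_one, find, update_one, and delete_one):

```python
# Toy document store sketching MongoDB's CRUD semantics.
collection = []

def create(doc):                      # like insert_one
    collection.append(dict(doc))

def read(query):                      # like find
    return [d for d in collection
            if all(d.get(k) == v for k, v in query.items())]

def update(query, changes):           # like update_one with $set
    for d in read(query):
        d.update(changes)
        break

def delete(query):                    # like delete_one
    for d in read(query):
        collection.remove(d)
        break

create({'name': 'Ada', 'role': 'admin'})
update({'name': 'Ada'}, {'role': 'user'})
print(read({'name': 'Ada'})[0]['role'])  # user
```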
Yes. Removing a document from the database results in it being removed from the disk as well.
The process of synchronizing data across multiple servers is known as replication. The process increases data availability and offers redundancy, which in turn protects the database from the loss of a single server. Thus, replication allows for faster recovery from service interruptions or hardware failures.
The points you need to take into consideration when creating schema are:
While both databases are free and open-source, there are some significant divergences. For instance, MySQL represents data in the form of rows and tables, while the data in MongoDB is represented as collections of JavaScript Object Notation (JSON) documents.
Furthermore, when performing a query, strings get put into the query language that the database system then parses. In fact, the language is called Structured Query Language (SQL), and yes, that’s where MySQL gets its name.
MongoDB uses object querying; you just pass a document to MongoDB detailing what you’re querying; no parsing at all.
In terms of transactions, MySQL supports atomic transactions, which are defined as having the ability to run multiple operations within one transaction; on the other hand, MongoDB doesn’t support transactions; the single operation is atomic.
When using MySQL, you need to define the schema, but MongoDB doesn’t require it. So in the former case, you need to define the columns and tables, whereas in the latter case, you just drop in the documents and you’re good. If you are keen on conducting performance testing and analysis, then MySQL is your best bet, since MongoDB has no reporting tools.
Finally, MySQL, being a relational database, supports the JOIN operation, which permits conducting queries across multiple tables. In contrast, MongoDB doesn't support the JOIN operation, but it does support multi-dimensional data types. As a result of this lack of JOIN support, MongoDB can perform better than databases like MySQL.
ObjectId is made up of a:
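As commonly documented, the 12 bytes of an ObjectId break down into a 4-byte big-endian Unix timestamp, a 5-byte random value, and a 3-byte incrementing counter. A plain-Python sketch (no driver needed; the sample id and the helper name are ours, used only for illustration) can pull out the embedded creation time:

```python
from datetime import datetime, timezone

def objectid_timestamp(oid_hex):
    """Extract the creation time from a 24-character hex ObjectId.

    The first 4 bytes (8 hex characters) hold a big-endian Unix
    timestamp in seconds; the remaining bytes are random/counter data.
    """
    seconds = int(oid_hex[:8], 16)
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

print(objectid_timestamp("507f1f77bcf86cd799439011"))  # a 2012 timestamp
```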
Whether you’re preparing for an interview or simply upskilling, Simplilearn offers you the training you need to advance your career in MongoDB. With the MongoDB Developer and Administrator Certification training course, you can become an expert MongoDB developer and administrator, gaining an in-depth knowledge of NoSQL and mastering skills of data modeling, ingestion, query, sharding, and data replication.
The course is offered as self-paced learning, online classroom Flexi-Pass, or as a corporate training solution, and includes 32 hours of instructor-led training, three industry-based projects in e-learning and telecom domains, 17 hours of self-paced video, and a half-dozen lab exercises conducted on a virtual machine.
The course is best suited for database administrators, software developers, system administrators, and analytics professionals. Once you complete an assigned project and pass the online certification exam, you will be a certified MongoDB professional.
Take a look at what Simplilearn can do for you before you take the first step into this challenging but rewarding field.
https://www.simplilearn.com/mongodb-interview-questions-and-answers-article
100 Days of Algorithms is a series of Medium posts and Jupyter Notebooks by Tomáš Bouda that implement 100 interesting algorithms. They're a programming exercise that Bouda set for himself: can he implement 100 interesting algorithms, one per day?
The answer was “yes.” The algorithms range from classics like Towers of Hanoi to Bloom filters and graph traversal. Over the coming weeks, we’ll be featuring selections from Bouda's 100 Days of Algorithms project here on O’Reilly.
Day 57: Quicksort.
Here's Bouda's Medium post, and you can access and clone the Jupyter Notebook here.
import numpy as np
algorithm
def swap(data, i, j): data[i], data[j] = data[j], data[i]
def qsort(data): qsort3(data, 0, len(data) - 1)
run
data = np.random.randint(0, 10, 100)
print(data)
qsort(data)
print(data)
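The run cell above relies on a qsort3 helper defined elsewhere in the full notebook. As a sketch only (our reconstruction under the assumption that qsort3(data, left, right) is an in-place three-way quicksort, not Bouda's original code), it could look like this:

```python
def qsort3(data, left, right):
    """In-place three-way quicksort of data[left..right] (inclusive)."""
    if left >= right:
        return
    pivot = data[left]
    # Invariant: data[left..lt-1] < pivot, data[gt+1..right] > pivot
    lt, i, gt = left, left, right
    while i <= gt:
        if data[i] < pivot:
            data[lt], data[i] = data[i], data[lt]
            lt += 1
            i += 1
        elif data[i] > pivot:
            data[i], data[gt] = data[gt], data[i]
            gt -= 1
        else:
            i += 1
    qsort3(data, left, lt - 1)   # recurse on the < pivot block
    qsort3(data, gt + 1, right)  # recurse on the > pivot block

def qsort(data):
    qsort3(data, 0, len(data) - 1)

data = [3, 1, 4, 1, 5, 9, 2, 6]
qsort(data)
print(data)  # [1, 1, 2, 3, 4, 5, 6, 9]
```

The three-way partition groups elements equal to the pivot in the middle, which keeps the sort fast on inputs with many duplicates, such as the random 0-9 integers used in the notebook.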
Technical notes:
The easiest way to install Jupyter Notebooks is to use Anaconda. The second easiest (and most bulletproof) way is to install Docker and then use the scipy-notebook container.
If you're rolling your own Jupyter environment, you need:
Python 3.5 (a few of the “days” require 3.6; most will work with 3.4)
https://www.oreilly.com/ideas/implementing-the-quicksort-algorithm
Pixie::Complicity - making things play well with pixie
Complicity: <<definition>>
For many objects, Pixie can and does store the object transparently with no assistance from the object's class. However, sometimes that's just not the case; most commonly in the case of classes that are implemented using XS, and which store their data off in some C structure that's inaccessible from Perl. Getting at such information without the complicity of the class in question would require Pixie to be, near as dammit, telepathic. And that's not going to happen any time soon.
So, we provide a set of methods in UNIVERSAL, which are used by Pixie in the process of storing and fetching objects. All you have to do is override a few of them in the class in question. (Remember, even if you're using a class from CPAN, the class's symbol table is always open, so you can cheat and add the helper methods anyway.) We've chosen a method namespace (all methods begin with px_) which we hope doesn't clash with any classes that are out there, in the wild.
Consider the Set::Object class. It's a very lovely class, implementing a delightfully fast set, with all the set operations you'd expect. However, in order to get the speed, it's been implemented using XS, and the Data::Dumper-visible part of it is simply a scalar reference. So, if we want to use Set::Object in our project (and we do), we need to make it complicit with Pixie.

So, first we make sure that Pixie knows it's storable:
sub Set::Object::px_is_storable { 1 }
Then we think about how we're going to render the thing storable. The only important thing about a set, for our purposes, is the list of its members (and what do you know, Set::Object provides a members method to get at that). We'll press the 'memento' pattern into use. The idea is that we create a memento object which will store enough information about an object for that object to be recreated later. We set up Set::Object's px_freeze method to create that memento:
sub Set::Object::px_freeze {
    my $self = shift;
    return bless [ $self->members ], 'Memento::Set::Object';
}
Easy. For our next trick, we need to provide some way for a memento to be turned back into an object. Pixie guarantees to call px_thaw on every object that it retrieves from the data store, so all we have to do is implement an appropriate px_thaw method in the memento class.
sub Memento::Set::Object::px_thaw {
    my $self = shift;
    return Set::Object->new(@$self);
}
And, as if by magic, Set::Objects can now be happily persisted within your Pixie.
Pixie puts a lot of methods into UNIVERSAL, because that's where the behaviour makes the most sense. Some of these methods are useful to override when you need to help Pixie out with object storage; others are useful when you're writing the tools that use Pixie (but we haven't actually added many of those yet) and still others are almost certainly never going to be overridden by client code, but we'll document them just in case. We start with the 'storage helper' methods that you are most likely to override:
A boolean method. By default, Pixie thinks only HASH and ARRAY based objects are storable. If you have a class that you want to make persistent, and it doesn't use one of these representations, then just add sub px_is_storable { 1 } to your class definition.
Called by Pixie on every object that it stores, px_freeze transforms an object into something a little more... storable. Remember, px_freeze operates on the 'real' object, not a copy. Generally you should create a new object in some memento class, dump the storable state into it and return the memento. (Of course, if px_freeze just gets rid of some cached computations, you might prefer to operate directly on the object.)
Called by Pixie on every object that it retrieves from the store. Use this to turn memento objects back into the real thing.
NB: If your px_freeze blesses an object into a separate memento class, then remember to implement px_thaw in the memento class, not the source class.
Another boolean. Used by Pixie to know whether an object in this class should be immediately fetched in cases where Pixie would normally use a Pixie::Proxy object to provide deferred loading. You generally want to use this for objects that get accessed directly (you naughty encapsulation violator you), because a Pixie::Proxy only fetches the real thing when it notices a method call to the object.
Returns an unblessed HASH/ARRAY/SCALAR ref which is a shallow clone of the object in question.
Sometimes you can get away without having to write px_freeze and px_thaw. Say you have a hash based object, and some of its keys are the cached (large) results of an expensive computation, which can be entirely derived from the 'real' instance variables. To strip those out of the stored object, you could do the following:
sub px_as_rawstruct {
    my $self = shift;
    { @$self{ grep !/^cached_/, keys %$self } }
}
Aren't hash slices lovely?
Class method. Returns an empty object in the given class. The default implementation of this does
$class->new(). We do this so that the class can 'know about' its instance (some classes like to initialize various static variables etc...) but, if your class's 'new' method doesn't cope with an empty argument list, you could override this method. (I'm thinking of adding a 'px_post_populating_hook' method, which would be called after pixie has populated an object. Useful for those classes whose 'new' methods require arguments and then call an init method to set up stuff based on the instance variables...)
http://search.cpan.org/~spurkis/Pixie/lib/Pixie/Complicity.pm
Hi Volodya,

Thanks for your great help. I have tried those two files (those are very visual :). But just as you mentioned: is it possible to register the corresponding Python class for a custom C++ class in an explicit way before invoking PyRun_String?

Best Regards,
Martin

"Vladimir Prus" <ghost at cs.msu.su> wrote in message news:d4neu4$qas$1 at sea.gmane.org...
> Martin Dai wrote:
>
> > Hi everyone,
> >
> > Now I have solved the passing of the variable when the variable is a
> > simple type, but when I bind the C++ class object into the dictionary,
> > it compiles successfully, then aborts once executed.
>
> With what error message?
>
> > The below is the c++ source code:
> >
> > if (PyImport_AppendInittab("embedded_hello", initembedded_hello) == -1)
>
> Please always provide a complete program.
>
> > main_namespace["AppBase"] = python::object(python::ptr(pBase)); //abort here
> >
> > python::handle<> result(
> >     PyRun_String(
> >         "from embedded_hello import * \n"
> >         "print AppBase.i \n",
> >         Py_file_input, main_namespace.ptr(), main_namespace.ptr())
> > );
>
> I've tried to reproduce this, and got:
>
> that works fine. However, when I move the
>
>     main_namespace["context"] = ptr(b);
>
> above the call to PyRun_String (as is done in your example),
> I get this:
>
>     TypeError: No Python class registered for C++ class B
>
> which looks quite reasonable. Python has no idea that class 'B' even exists,
> but you're trying to add an instance of that class. I wonder if it's possible
> to automatically register the Python class in this case, though.
>
> - Volodya
https://mail.python.org/pipermail/cplusplus-sig/2005-April/008535.html
I want to sort and array of instances of a struct via qsort, but I'm really helpless yet... :-/ For two days I was trying to follow some tutorials, nevertheless I wasn't successful
I would really appreciate it if someone would be so nice as to look at my code and show me how it should be coded to work... I simply want to sort personArr by the property name.
PS:I can use only those includes
#include <iostream>
#include <iomanip>
#include <string>
#include <cstring>
#include <cstdlib>
#include <cstdio>
using namespace std;

class CPhoneBook
{
  public:
    struct item
    {
      string name;
      string address;
      string phone;
    } ITEM;
    int k;
    item* personArr;
    CPhoneBook ( void );
    ~CPhoneBook ( void );
    bool Add ( const string & name, const string & address, const string & phone )
    {
      item i;
      i.name = name;
      i.address = address;
      i.phone = phone;
      personArr[k++] = i;
      sort();
      return true;
    };
    void sort()
    {
      //here I want to sort personArr
    }
    bool Del ( const string & name, const string & address ) { return true; };
    bool Search ( const string & name, const string & address, string & phone ) const { return true; };
};

CPhoneBook::CPhoneBook ()
{
  personArr = new item[1000];
  k = 0;
}

CPhoneBook::~CPhoneBook ()
{
  delete []personArr;
}

int main ()
{
  CPhoneBook rect;
  rect.Add("a1","z2","z3");
  rect.Add("x1","z2","z3");
  rect.Add("0y","2","3");
  rect.Add("z1","z2","z3");
}
The faster, the better....the deadline is close...thanks for any help..
http://www.dreamincode.net/forums/topic/272322-qsort-in-struc-help-needed-asap-s/page__p__1584449
Problem Statement:
In a forest near Bandipur in India, there are some bamboo trees. The length of each tree gets doubled during winter and increases by one unit in summer. Write a Java program to calculate the total length of n bamboo trees over M seasons. The season always starts with winter.
--- Update ---
import java.util.Scanner;
public class Tree {
public static void main(String args[]) {
int length;
int season;
int su, wi;
int sum = 0;
int n = 0;
int TotalLength=0;
System.out.println("enter the number of seasons");
Scanner in = new Scanner(System.in);
season = in.nextInt();
System.out.println("enter the length of tree");
length = in.nextInt();
System.out.println("enter the total number of trees in forest");
n = in.nextInt();
for (int i = 0; i < (season); i++) {
if ((i % 2) == 0) {
length = length * 2;
}
if ((i % 2) != 0) {
length = length + 1;
}
}
TotalLength = length*n;
System.out.println("the total trees lenght is: " + TotalLength);
}
}
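As a cross-check of the season loop above, the same recurrence can be sketched in Python (a verification aid, not part of the original Java post): in even-numbered seasons (winter) the length doubles, in odd-numbered seasons (summer) it grows by one unit.

```python
def total_length(length, seasons, trees):
    """Total length of `trees` identical bamboo trees after `seasons`
    seasons, where the first season is winter."""
    for i in range(seasons):
        if i % 2 == 0:
            length *= 2      # winter: length doubles
        else:
            length += 1      # summer: length grows by one unit
    return length * trees

# One winter doubles 1 -> 2, one summer adds 1 -> 3; times 4 trees = 12.
print(total_length(1, 2, 4))  # 12
```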
http://www.javaprogrammingforums.com/whats-wrong-my-code/35906-season-tree.html
Angular TypeScript Vs ES6 Vs ES5
TypeScript is an open-source programming language and a superset of ES6. So if you write ES6 code, the same code is valid and compilable with the TypeScript transcompiler. The relationship between TypeScript, ES6 and ES5 is shown in the diagram below.
What is ES5?
ES5 is short for "ECMAScript 5", also called "regular JavaScript". ES5 is the normal JavaScript that we all know, and almost every browser supports it.
What is ES6 ?
ES6 is the next version of JavaScript, and only very few browsers support this version of ECMAScript completely; TypeScript support is rarer still. To solve this issue we have the TypeScript transcompiler, which converts TypeScript code to ES5 code that is supported and understood by most browsers.
Major improvements of TypeScript over ES5
Types – Type checking helps when you are writing code, as it prevents bugs at compile time, and it also helps the reader understand the code well. Note that types are optional in TypeScript.
For example you can provide the variable type along with its name as shown below
name: string;

function hello (name: string): string {
    return "Hello " + name;
}
See what happens if we change the above code like below
function hello (name: string): string {
    return 10;
}
If you try to compile the above code you will see the following error
Error TS2322: Type ’10’ is not assignable to type ‘string’
Classes – In ES5, object-oriented programming was achieved through prototype-based objects; the model does not use classes. ES6 finally provides built-in classes in JavaScript.

To define a class you use the "class" keyword, as in Java, and provide a name and body for the class. Classes may contain properties, constructors and methods.
class Product {
    //properties
    //constructors
    //methods
}
Decorators – These are design patterns used to add annotations and define metadata on classes, modules, methods, properties, etc.
For example in ’app.component.ts’, annotating the AppComponent class as Angular Component using @Component decorator and also defined metadata.
import { Component } from '@angular/core';

//Decorator - Metadata
@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'app';
}
Imports – Used to define dependencies for a module. For example, in the following piece of code from 'app.module.ts', "imports" is used to import BrowserModule, which is required to create a browser app.
@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
Destructuring – It is nothing but breaking down a complex structure in to simple parts. Usually the complex structure is nothing but an object or an array. With destructuring syntax you could extract smaller parts from arrays and objects. Destructuring syntax can be used for variable declaration or variable assignment.
Further Learning
- Angular Application Flow and Bootstrapping
- Angular JS Project Setup and Hello World Application
- “Port 4200 is already in use” while running ng serve Angular CLI command
https://www.sneppets.com/angular/typescript-vs-es6-vs-es5/
Homey can read NFC cards like your public transport card. This feature has a lot of potential applications. But for many of those applications having to physically touch your NFC card to your Homey can make implementing your NFC ideas a bit of a hassle.
Using Homeyduino you can now easily make your own NFC readers that work with Homey, removing the issue of having to be physically near Homey and allowing you to place NFC readers wherever you want.
For this project we’ve used:
This project used only two modules which are connected to each other: the NodeMCU board and the RFID-RC522 board. The following table and Fritzing diagram show how the two are connected.
This is what our prototype looks like:
Make sure you have the following programs installed:
#include <ESP8266WiFi.h>
#include <WiFiClient.h>
#include <Homey.h>
#include <SPI.h>
#include "MFRC522.h"

/* PIN CONFIG */
#define MFRC522_RESET D3
#define MFRC522_SS D8

/* OBJECTS */
MFRC522 mfrc522(MFRC522_SS, MFRC522_RESET);

/* GLOBAL VARIABLES */
unsigned long previousMillis = 0;
const unsigned long interval = 100; //Interval in milliseconds
String oldCardId = "";
uint8_t timeout = 0;(")"); } } }

//Arduino functions
void setup() {
  Serial.begin(115200);
  Homey.begin("RFID");
  Homey.setClass("sensor");
  SPI.begin();
  mfrc522.PCD_Init();
}

void loop() {
  wifi();
  Homey.loop();
  unsigned long currentMillis = millis();
  if (currentMillis - previousMillis > interval) {
    previousMillis = currentMillis;
    if ( mfrc522.PICC_IsNewCardPresent() ) {
      mfrc522.PICC_ReadCardSerial();
      String cardId = "";
      for (uint8_t i = 0; i < mfrc522.uid.size; i++) {
        if (mfrc522.uid.uidByte[i] < 0x10) cardId += "0";
        cardId += String(mfrc522.uid.uidByte[i], HEX);
      }
      if (cardId != oldCardId) { //Check if we didn't already see this card
        oldCardId = cardId;      //Store the card id
        Serial.println(cardId);  //Print the card id to the serial monitor
        Homey.trigger("card", cardId); //Send the card id to Homey
        timeout = 50; //Allow the same card to be seen again after 5 seconds (50*100ms)
      }
    } else {
      if (timeout > 0) {
        timeout--;
        if (timeout == 0) oldCardId = ""; //Remove previous card ID
      }
    }
  }
}
Before flashing this sketch to your NodeMCU board make sure that you have set the board type to “NodeMCU 1.0”, otherwise the sketch will not compile due to missing definitions for the D3 and D8 pins.
After we made sure everything works as expected we made the project smaller by taking away the breadboard: we soldered the RFID-RC522 module to a Wemos D1 mini, giving us a small device that we could easily mount inside an enclosure.
The identification number of a compatible RFID card will be sent to Homey as a text string. By adding the "Action [text]" flowcard of the device to your flow and selecting the "card" trigger, your flow will be triggered whenever a card is scanned. The autocomplete field will only show the "card" trigger once a card has been successfully scanned at least once. If it is not shown (yet) you can always type in the name of the trigger manually.
https://arduino.tkkrlab.nl/les-6-domotica-met-de-esp266/homey/nfc/
Example
using System;

namespace ConsoleApplication1
{
    class B
    {
        protected int i, j; // private to B, but accessible by D

        public void Set(int a, int b)
        {
            i = a;
            j = b;
        }

        public void Show()
        {
            Console.WriteLine(i + " " + j);
        }
    }

    class D : B
    {
        int k; // private

        // D can access B's i and j
        public void Setk()
        {
            k = i * j;
        }

        public void Showk()
        {
            Console.WriteLine(k);
        }
    }

    class ProtectedDemo
    {
        static void Main()
        {
            D ob = new D();

            ob.Set(2, 3); // OK, known to D
            ob.Show();    // OK, known to D

            ob.Setk();  // OK, part of D
            ob.Showk(); // OK, part of D
        }
    }
}
In this example, because B is inherited by D and because i and j are declared as protected in B, the Setk( ) method can access them. If i and j had been declared as private by B, then D would not have access to them, and the program would not compile.
Like public and private, protected status stays with a member no matter how many layers of inheritance are involved. Therefore, when a derived class is used as a base class for another derived class, any protected member of the initial base class that is inherited by the first derived class is also inherited as protected by a second derived class.
Although protected access is quite useful, it doesn’t apply in all situations. For example, in the case of TwoDShape shown in the preceding section, we specifically want the Width and Height values to be publicly accessible. It’s just that we want to manage the values they are assigned. Therefore, declaring them protected is not an option. In this case, the use of properties supplies the proper solution by controlling, rather than preventing, access. Remember, use protected when you want to create a member that is accessible throughout a class hierarchy, but otherwise private. To manage access to a value, use a property.
http://www.loopandbreak.com/protected-access/
1.Problem Statement
After Xamarin was acquired by Microsoft, it became a natural expectation of Xamarin developers and testers to have scripting support available from inside the Visual Studio environment. This article explains how to set up Visual Studio with the required SDK and tools, and how to write your first script and upload it to Test Cloud.
2.Before we start!
- Operating Systems – Windows 7 or OS X 11.0,12.0
- 1.6 GHz or faster processor
- 1 GB of RAM (1.5 GB if running on a virtual machine)
- 4 GB of available hard disk space
- 5400 RPM hard drive
- Download visual studio from here:
3.Getting ready
Download Visual studio community version and then follow the following steps.
- Double click the downloaded file and Click on the “Run” button.
- After Clicking on the run button, a pop up will prompt on the screen. Select the “Custom” option and click on the “Next” button.
- Once you click on the “Custom” button you will land to this page.
4.Complete visual studio Installation Instructions:
To complete the installation process we would require certain tools as listed below:
- Common Tools for Visual C++ 2015.
- Microsoft Sql Server Data Tools
- Microsoft Web Developer Tools.
- Windows 10 SDK(10.0.10586)
- Visual C++ Mobile Development
- Android Native Development Kit(R11C,64 Bits)[3rd Party]
- Apache Ant(1.9.3)[3rd Party]
In order to complete the installation of visual studio, please follow the steps mentioned below:
5.Steps:
- To install, expand Programming Languages and select “Visual C++”. Under Visual C++ select “Common Tools for Visual C++ 2015” and click on the “Next” button.
- To install “Microsoft SQL Server Data Tools” & “Microsoft Web Developer Tools” Expand “Windows and Web Development” tree and select “Microsoft SQL Server Data Tools” & “Microsoft Web Developer Tools” and Click on “Next”.
- To install “Windows 10 SDK (10.0.10586)” Expand the tree of “Universal Windows App Development Tools” and select “Windows 10 SDK(10.0.10586)” and Click on “Next”.
- To install “Visual C++ Mobile Development”, expand the tree for “Cross Platform Mobile Development” and select “Visual C++ Mobile Development” and then it will automatically select “Android Native Development Kit (R11C,64 bits) 3rd Party”, “Apache Ant(1.9.3)3rdParty” to complete the installation of these tools click on “Next”.
- Click on “Install” to complete the Process.
6.Create a Test Project
1.Open Visual studio.
2.Click on “File” and add a new “Project”.
3.Select “Test” from the left panel and from the right pane select “UI Test App(Xamarin.UITEST|Android)”.
4.Enter the name of the project .
5.In the SETUP section enter the path of the apk file.
7.Create Test Script (for example, script for Login)
using System;
using System.IO;
using System.Linq;
using NUnit.Framework;
using Xamarin.UITest;
using Xamarin.UITest.Queries;
using Xamarin.UITest.Android;

namespace AppLaunch
{
    [TestFixture]
    public class Tests
    {
        AndroidApp app;

        [SetUp]
        public void BeforeEachTest()
        {
            // TODO: If the Android app being tested is included in the solution then open
            // the Unit Tests window, right click Test Apps, select Add App Project
            // and select the app projects that should be tested.
            app = ConfigureApp.Android.ApkFile(@"PASTE YOUR APK FILE PATH HERE").StartApp(); // This will launch the application.
        }

        [Test]
        public void AppLaunches()
        {
            // app.Repl(); // Allows a user to interact with the object hierarchy.
            app.EnterText(x => x.Marked("input_userName"), ""); // Enter the username in the username field.
            app.EnterText(x => x.Marked("input_password"), "Rajit"); // Enter the password in the password field.
            app.Tap(x => x.Marked("loginBtn")); // Tap the login button.
        }
    }
}
8.Build Test Project in the Release Mode
a.Select the mode as “Release”.
b.Click on Build from the main toolbar and select “Build Solution”.
9.Upload the Script on Test Cloud
a.Sign Up at :
b.Click on “New Test Run” on the header bar
c.Select the platform(Android/IOS) and click on “Next”.
d.Select the specification (OS version, Manufacturer, Memory, Device family, Processor) from the left panel, select the device from the centre and Click on the Select x device button in the bottom.
e.You can run your script in a custom series if you want to maintain one; otherwise keep the master series selected and click on "Next".
e.On this final step you can select following things:
- Select the language of the script (C# or Ruby)
- Platform of your system where the script is written
- And copy the generated command
packages\Xamarin.UITest.[version]\tools\test-cloud.exe submit yourAppFile.apk ff2d3a610f6b0e01460446fbf22a48e1 --devices bbb58a62 --series "master" --locale "en_US" --user user.name@walkingtree.tech --assembly-dir pathToTestDllFolder
Note: At the end paste the folder path where the Dll files are placed.
The complete command should look like this:
test-cloud.exe submit com.walkingtree.Sales360.Droid-Signed.apk 8889f97cfeeb047204a5fb0df65f321d --devices bbb58a62 --series "master" --locale "en_US" --app-name "Sales 360" --user user.name@walkingtree.in --assembly-dir "D:\Src\Sales360\Sales360\Sales360.Droid\AppLaunch\AppLaunch\bin\Release"
- Click on Done.
10.Steps to follow in your local system to upload script:
a.Open CMD in your local machine.
b.Type Cd and press “Enter”.
c.Type “d:” and press “Enter”(Reach out the drive where your script is placed in your system).
d.Reach the testcloud.exe file inside packages folder.
For me it was:
D:\Src\Sales360\Sales360\Sales360.Droid\AppLaunch\packages\Xamarin.UITest.1.0.0\tools
Copy that path and type cd D:\Src\Sales360\Sales360\Sales360.Droid\AppLaunch\packages\Xamarin.UITest.1.0.0\tools and press “Enter”
e.Now enter the command we copied from test cloud and press “Enter”:
test-cloud.exe submit com.walkingtree.Sales360.Droid-Signed.apk 8889f97cfeeb047204a5fb0df65f321d --devices bbb58a62 --series "master" --locale "en_US" --app-name "Sales 360" --user vaishali.tailor@walkingtree.in --assembly-dir "D:\Src\Sales360\Sales360\Sales360.Droid\AppLaunch\AppLaunch\bin\Release"
Once it is done the command prompt will look like this:
11.Summary
In this article, I explained how the visual studio can be configured to setup the Xamarin Test Cloud Environment on a given machine. Often the test automation falls under QA team and they do face challenges with the initial setup of the Xamarin Test Cloud environment. I hope this blog will help you in getting started with Xamarin Test Cloud without much fuss.
At WalkingTree we encourage our customers to have the test automation in place. Xamarin Test Cloud has been a big savior for us and we do recommend our customers to utilize this.
https://wtcindia.wordpress.com/2017/02/25/setting-up-visual-studio-for-xamarin-test-cloud/
One more thing I'm going to need is an LCD screen to enable the user to set the date/time. For all the more it'll be used (probably once in 10 years, if the coin cells last as long as they say they do), I won't even bother with the backlight.
And the data sheets I've looked at show the LCD supply current to be only 1-2 mA, so I can power that from a digital pin, too. Would a 150 ohm resistor be good between the I/O pin and the Vcc on the LCD?
What about the 6 I/O pins controlling the LCD? I know they're a kind of communication lines, but I just use a LiquidCrystal library to control them. I'll likely be losing some current through them? After the user is done setting the time, I could disconnect the LCD, only I don't see an LCD.end() function, only the LCD.begin(). Will there be any power consumption from this connection?
You need to drive all the pins connected to the LCD low before turning off its power, then they won't draw any current.
What will I need to do to turn them low, just digitalWrite(pin, LOW)?
Quote from: SouthernAtHeart on Mar 27, 2013, 06:53 pm: "What will I need to do to turn them low, just digitalWrite(pin, LOW)?"

Yes. BTW there is a problem with the LiquidCrystal library: it makes a call to the begin() function in the LiquidCrystal constructor (a very silly thing to do IMO, and completely unnecessary, because you should call begin() in setup). So you should preferably patch the LiquidCrystal library source file to remove this call, to avoid feeding power through the I/O connections before you power up the LCD.
Yes. BTW there is a problem with the LiquidCrystal library: it makes a call to the begin() function in the LiquidCrystal constructor
avoid feeding power through the I/O connections before you power up the LCD
I reckon I should use the backlight for my LCD, then. I searched and found this schematic that turns it on and off. It cuts the NEG side, but I reckon the POS side isn't connected to the LCD's POS, so it won't back feed into it when I power off the LCD's chip. Not sure about the value of R7. Running from 4.5V, I may not need a very high resistor?
One thing I don't know/understand. When I power down this circuit by a LOW arduino pin, there won't be any current left flowing in the BSS1338 mosfet, then will there? See the attached clip from the BSS1338 data sheet. Does that mean there'll be .5 uA current loss all the time?
I reckon the one controlling my H-bridge will add another .5uA.
Running from 9V, you definitely need R1, R2 to be 2 x 1K. However, your schematic is wrong. They need to be in series, not in parallel, and the gate drive to the P-channel mosfet needs to be taken from the junction of the two. I would connect the 100nF capacitor to the source terminal of the P-channel mosfet instead of the drain terminal, and put a much larger electrolytic capacitor (e.g. 1000uF) in parallel with it. Position the P-channel mosfet and capacitors close to the 9986, and keep the traces connecting these 4 components short.
#include <LiquidCrystal.h>

// initialize the library with the numbers of the interface pins
LiquidCrystal lcd(A7, A0, A2, A3, A4, A5);
int LCDpins[] = { A7, A0, A2, A3, A4, A5 };
int LCDpower = 4;        //pin controlling LCD power
unsigned long LCDtimer;  //counter

void setup() {
  //don't use LCD.begin() here, correct?
  pinMode(LCDpower, OUTPUT);    //set pin as output
  digitalWrite(LCDpower, LOW);  //maybe not needed?
}

void loop() {
  if (millis() - LCDtimer > 10000) { //every 10 seconds
    LCDtimer = millis();    //update counter
    display_something();    //activate the LCD
  }
  //do other stuff...
  delay(100);
}

void display_something() {
  digitalWrite(LCDpower, HIGH); //power up the LCD
  lcd.begin(16, 2);             //start the LCD
  lcd.clear();                  //clear the screen
  lcd.print(millis());          //print the time
  delay(2000);                  //give user time to read it
  for (int i = 0; i < 6; i++) {
    digitalWrite(LCDpins[i], LOW);
  }
  digitalWrite(LCDpower, LOW);  //power down the LCD
}
The power requirements depend a little on how much of the display is lit, but on average the display uses about 20 mA from the 3.3 V supply.
This tutorial has a related video course created by the Real Python team. Watch it together with the written tutorial to deepen your understanding: Explore Your Dataset With Pandas
To follow along with this tutorial, you can get all of the example code at the link below:
Get Jupyter Notebook: Click here to get the Jupyter Notebook you’ll use to explore data with Pandas in this tutorial.
Setting Up Your Environment
There are a few things you’ll need to get started with this tutorial. First is a familiarity with Python’s built-in data structures, especially lists and dictionaries. For more information, check out Lists and Tuples in Python and Dictionaries in Python.
The second thing you’ll need is a working Python environment. You can follow along in any terminal that has Python 3 installed. If you want to see nicer output, especially for the large NBA dataset you’ll be working with, then you might want to run the examples in a Jupyter notebook.
Note: If you don’t have Python installed at all, then check out Python 3 Installation & Setup Guide. You can also follow along online in a try-out Jupyter notebook.
The last thing you’ll need is Pandas and other Python libraries, which you can install with pip:
$ python3 -m pip install requests pandas matplotlib
You can also use the Conda package manager:
$ conda install requests pandas matplotlib
If you’re using the Anaconda distribution, then you’re good to go! Anaconda already comes with the Pandas Python library installed.
Note: Have you heard that there are multiple package managers in the Python world and are somewhat confused about which one to pick?
pip and
conda are both excellent choices, and they each have their advantages.
If you’re going to use Python mainly for data science work, then
conda is perhaps the better choice. In the
conda ecosystem, you have two main alternatives:
- If you want to get a stable data science environment up and running quickly, and you don’t mind downloading 500 MB of data, then check out the Anaconda distribution.
- If you prefer a more minimalist setup, then check out the section on installing Miniconda in Setting Up Python for Machine Learning on Windows.
The examples in this tutorial have been tested with Python 3.7 and Pandas 0.25.0, but they should also work in older versions. You can get all the code examples you’ll see in this tutorial in a Jupyter notebook by clicking the link below:
Get Jupyter Notebook: Click here to get the Jupyter Notebook you’ll use to explore data with Pandas in this tutorial.
Let’s get started!
Using the Pandas Python Library
Now that you’ve installed Pandas, it’s time to have a look at a dataset. In this tutorial, you’ll analyze NBA results provided by FiveThirtyEight in a 17MB CSV file. Create a script
download_nba_all_elo.py to download the data:
import requests

download_url = ""
target_csv_path = "nba_all_elo.csv"

response = requests.get(download_url)
response.raise_for_status()  # Check that the request was successful

with open(target_csv_path, "wb") as f:
    f.write(response.content)

print("Download ready.")
When you execute the script, it will save the file
nba_all_elo.csv in your current working directory.
Note: You could also use your web browser to download the CSV file.
However, having a download script has several advantages:
- You can tell where you got your data.
- You can repeat the download anytime! That’s especially handy if the data is often refreshed.
- You don’t need to share the 17MB CSV file with your co-workers. Usually, it’s enough to share the download script.
Now you can use the Pandas Python library to take a look at your data:
>>> import pandas as pd
>>> nba = pd.read_csv("nba_all_elo.csv")
>>> type(nba)
<class 'pandas.core.frame.DataFrame'>
Here, you follow the convention of importing Pandas in Python with the
pd alias. Then, you use
.read_csv() to read in your dataset and store it as a
DataFrame object in the variable
nba.
Note: Is your data not in CSV format? No worries! The Pandas Python library provides several similar functions like
read_json(),
read_html(), and
read_sql_table(). To learn how to work with these file formats, check out Reading and Writing Files With Pandas or consult the docs.
You can see how much data
nba contains:
>>> len(nba)
126314
>>> nba.shape
(126314, 23)
You use the Python built-in function
len() to determine the number of rows. You also use the
.shape attribute of the
DataFrame to see its dimensionality. The result is a tuple containing the number of rows and columns.
Now you know that there are 126,314 rows and 23 columns in your dataset. But how can you be sure the dataset really contains basketball stats? You can have a look at the first five rows with
.head():
>>> nba.head()
If you’re following along with a Jupyter notebook, then you’ll see a result like this:
Unless your screen is quite large, your output probably won’t display all 23 columns. Somewhere in the middle, you’ll see a column of ellipses (
...) indicating the columns that aren't shown. If you're working in a terminal, then that's probably more readable than wrapping long rows. However, Jupyter notebooks will allow you to scroll. You can configure Pandas to display all 23 columns like this:
>>> pd.set_option("display.max.columns", None)
While it’s practical to see all the columns, you probably won’t need six decimal places! Change it to two:
>>> pd.set_option("display.precision", 2)
To verify that you’ve changed the options successfully, you can execute
.head() again, or you can display the last five rows with
.tail() instead:
>>> nba.tail()
Now, you should see all the columns, and your data should show two decimal places:
You can discover some further possibilities of
.head() and
.tail() with a small exercise. Can you print the last three lines of your
DataFrame? Expand the code block below to see the solution:
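The collapsed solution block doesn't survive in this text. As a sketch of one likely solution: `.tail()` accepts an optional row count, so `nba.tail(3)` would print the last three lines. Here it is with a small stand-in `DataFrame` instead of the full `nba` dataset:

```python
import pandas as pd

# Small stand-in for the nba DataFrame used in the tutorial
df = pd.DataFrame({"pts": [10, 20, 30, 40, 50]})

# .tail() takes an optional number of rows, so the last three lines are:
last_three = df.tail(3)
print(last_three)  # the rows with index labels 2, 3, and 4
```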
Similar to the Python standard library, functions in Pandas also come with several optional parameters. Whenever you bump into an example that looks relevant but is slightly different from your use case, check out the official documentation. The chances are good that you’ll find a solution by tweaking some optional parameters!
Getting to Know Your Data
You’ve imported a CSV file with the Pandas Python library and had a first look at the contents of your dataset. So far, you’ve only seen the size of your dataset and its first and last few rows. Next, you’ll learn how to examine your data more systematically.
Displaying Data Types
The first step in getting to know your data is to discover the different data types it contains. While you can put anything into a list, the columns of a
DataFrame contain values of a specific data type. When you compare Pandas and Python data structures, you’ll see that this behavior makes Pandas much faster!
You can display all columns and their data types with
.info():
>>> nba.info()
This will produce the following output:
You’ll see a list of all the columns in your dataset and the type of data each column contains. Here, you can see the data types
int64,
float64, and
object. Pandas uses the NumPy library to work with these types. Later, you’ll meet the more complex
categorical data type, which the Pandas Python library implements itself.
The
object data type is a special one. According to the Pandas Cookbook, the
object data type is “a catch-all for columns that Pandas doesn’t recognize as any other specific type.” In practice, it often means that all of the values in the column are strings.
Although you can store arbitrary Python objects in the
object data type, you should be aware of the drawbacks to doing so. Strange values in an
object column can harm Pandas’ performance and its interoperability with other libraries. For more information, check out the official getting started guide.
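As a quick illustration of the point above (a hypothetical three-column frame, not part of the NBA dataset): columns holding strings, and columns holding a mix of value types, both end up with the `object` data type, while a purely numeric column gets a specific type like `int64`:

```python
import pandas as pd

mixed = pd.DataFrame({
    "strings": ["a", "b", "c"],   # all strings -> object
    "mixed": [1, "two", 3.0],     # mixed types -> object (catch-all)
    "numbers": [1, 2, 3],         # all integers -> int64
})
print(mixed.dtypes)
# strings    object
# mixed      object
# numbers     int64
```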
Showing Basic Statistics
Now that you’ve seen what data types are in your dataset, it’s time to get an overview of the values each column contains. You can do this with
.describe():
>>> nba.describe()
This function shows you some basic descriptive statistics for all numeric columns:
.describe() only analyzes numeric columns by default, but you can provide other data types if you use the
include parameter:
>>> import numpy as np
>>> nba.describe(include=object)
.describe() won’t try to calculate a mean or a standard deviation for the
object columns, since they mostly include text strings. However, it will still display some descriptive statistics:
Take a look at the
team_id and
fran_id columns. Your dataset contains 104 different team IDs, but only 53 different franchise IDs. Furthermore, the most frequent team ID is
BOS, but the most frequent franchise ID is
Lakers. How is that possible? You’ll need to explore your dataset a bit more to answer this question.
Exploring Your Dataset
Exploratory data analysis can help you answer questions about your dataset. For example, you can examine how often specific values occur in a column:
>>> nba["team_id"].value_counts()
BOS    5997
NYK    5769
LAL    5078
...
SDS      11
Name: team_id, Length: 104, dtype: int64
>>> nba["fran_id"].value_counts()
Lakers     6024
Celtics    5997
Knicks     5769
...
Huskies      60
Name: fran_id, dtype: int64
It seems that a team named
"Lakers" played 6024 games, but only 5078 of those were played by the Los Angeles Lakers. Find out who the other
"Lakers" team is:
>>> nba.loc[nba["fran_id"] == "Lakers", "team_id"].value_counts()
LAL    5078
MNL     946
Name: team_id, dtype: int64
Indeed, the Minneapolis Lakers (
"MNL") played 946 games. You can even find out when they played those games. For that, you’ll first define a column that converts the value of
date_game to the
datetime data type. Then you can use the
min and
max aggregate functions to find the first and last games of the Minneapolis Lakers:
>>> nba["date_played"] = pd.to_datetime(nba["date_game"])
>>> nba.loc[nba["team_id"] == "MNL", "date_played"].min()
Timestamp('1948-11-04 00:00:00')
>>> nba.loc[nba["team_id"] == "MNL", "date_played"].max()
Timestamp('1960-03-26 00:00:00')
>>> nba.loc[nba["team_id"] == "MNL", "date_played"].agg(("min", "max"))
min   1948-11-04
max   1960-03-26
Name: date_played, dtype: datetime64[ns]
It looks like the Minneapolis Lakers played between the years of 1948 and 1960. That explains why you might not recognize this team!
You’ve also found out why the Boston Celtics team
"BOS" played the most games in the dataset. Let's also analyze their history a little bit. Find out how many points the Boston Celtics have scored during all matches contained in this dataset. Expand the code block below for the solution:
Similar to the
.min() and
.max() aggregate functions, you can also use
.sum():
>>> nba.loc[nba["team_id"] == "BOS", "pts"].sum()
626484
The Boston Celtics scored a total of 626,484 points.
You’ve got a taste for the capabilities of a Pandas
DataFrame. In the following sections, you’ll expand on the techniques you’ve just used, but first, you’ll zoom in and learn how this powerful data structure works.
Getting to Know Pandas’ Data Structures
While a
DataFrame provides functions that can feel quite intuitive, the underlying concepts are a bit trickier to understand. For this reason, you’ll set aside the vast NBA
DataFrame and build some smaller Pandas objects from scratch.
Understanding Series Objects
Python’s most basic data structure is the list, which is also a good starting point for getting to know
pandas.Series objects. Create a new
Series object based on a list:
>>> revenues = pd.Series([5555, 7000, 1980])
>>> revenues
0    5555
1    7000
2    1980
dtype: int64
You’ve used the list
[5555, 7000, 1980] to create a
Series object called
revenues. A
Series object wraps two components:
- A sequence of values
- A sequence of identifiers, which is the index
You can access these components with
.values and
.index, respectively:
>>> revenues.values
array([5555, 7000, 1980])
>>> revenues.index
RangeIndex(start=0, stop=3, step=1)
revenues.values returns the values in the
Series, whereas
revenues.index returns the positional index.
Note: If you’re familiar with NumPy, then it might be interesting for you to note that the values of a
Series object are actually n-dimensional arrays:
>>> type(revenues.values)
<class 'numpy.ndarray'>
If you’re not familiar with NumPy, then there’s no need to worry! You can explore the ins and outs of your dataset with the Pandas Python library alone. However, if you’re curious about what Pandas does behind the scenes, then check out Look Ma, No For-Loops: Array Programming With NumPy.
While Pandas builds on NumPy, a significant difference is in their indexing. Just like a NumPy array, a Pandas
Series also has an integer index that’s implicitly defined. This implicit index indicates the element’s position in the
Series.
However, a
Series can also have an arbitrary type of index. You can think of this explicit index as labels for a specific row:
>>> city_revenues = pd.Series(
...     [4200, 8000, 6500],
...     index=["Amsterdam", "Toronto", "Tokyo"]
... )
>>> city_revenues
Amsterdam    4200
Toronto      8000
Tokyo        6500
dtype: int64
Here, the index is a list of city names represented by strings. You may have noticed that Python dictionaries use string indices as well, and this is a handy analogy to keep in mind! You can use the code blocks above to distinguish between two types of
Series:
revenues: This
Seriesbehaves like a Python list because it only has a positional index.
city_revenues: This
Seriesacts like a Python dictionary because it features both a positional and a label index.
Here’s how to construct a
Series with a label index from a Python dictionary:
>>> city_employee_count = pd.Series({"Amsterdam": 5, "Tokyo": 8})
>>> city_employee_count
Amsterdam    5
Tokyo        8
dtype: int64
The dictionary keys become the index, and the dictionary values are the
Series values.
Just like dictionaries,
Series also support
.keys() and the
in keyword:
>>> city_employee_count.keys()
Index(['Amsterdam', 'Tokyo'], dtype='object')
>>> "Tokyo" in city_employee_count
True
>>> "New York" in city_employee_count
False
You can use these methods to answer questions about your dataset quickly.
Understanding DataFrame Objects
While a
Series is a pretty powerful data structure, it has its limitations. For example, you can only store one attribute per key. As you’ve seen with the
nba dataset, which features 23 columns, the Pandas Python library has more to offer with its
DataFrame. This data structure is a sequence of
Series objects that share the same index.
If you’ve followed along with the
Series examples, then you should already have two
Series objects with cities as keys:
city_revenues
city_employee_count
You can combine these objects into a
DataFrame by providing a dictionary in the constructor. The dictionary keys will become the column names, and the values should contain the
Series objects:
>>> city_data = pd.DataFrame({
...     "revenue": city_revenues,
...     "employee_count": city_employee_count
... })
>>> city_data
           revenue  employee_count
Amsterdam     4200             5.0
Tokyo         6500             8.0
Toronto       8000             NaN
Note how Pandas replaced the missing
employee_count value for Toronto with
NaN.
The new
DataFrame index is the union of the two
Series indices:
>>> city_data.index
Index(['Amsterdam', 'Tokyo', 'Toronto'], dtype='object')
Just like a
Series, a
DataFrame also stores its values in a NumPy array:
>>> city_data.values
array([[4.2e+03, 5.0e+00],
       [6.5e+03, 8.0e+00],
       [8.0e+03,     nan]])
You can also refer to the two dimensions of a
DataFrame as axes:
>>> city_data.axes
[Index(['Amsterdam', 'Tokyo', 'Toronto'], dtype='object'),
 Index(['revenue', 'employee_count'], dtype='object')]
>>> city_data.axes[0]
Index(['Amsterdam', 'Tokyo', 'Toronto'], dtype='object')
>>> city_data.axes[1]
Index(['revenue', 'employee_count'], dtype='object')
The axis marked with 0 is the row index, and the axis marked with 1 is the column index. This terminology is important to know because you’ll encounter several
DataFrame methods that accept an
axis parameter.
A
DataFrame is also a dictionary-like data structure, so it also supports
.keys() and the
in keyword. However, for a
DataFrame these don’t relate to the index, but to the columns:
>>> city_data.keys()
Index(['revenue', 'employee_count'], dtype='object')
>>> "Amsterdam" in city_data
False
>>> "revenue" in city_data
True
You can see these concepts in action with the bigger NBA dataset. Does it contain a column called
"points", or was it called
"pts"? To answer this question, display the index and the axes of the
nba dataset, then expand the code block below for the solution:
Because you didn’t specify an index column when you read in the CSV file, Pandas has assigned a
RangeIndex to the
DataFrame:
>>> nba.index
RangeIndex(start=0, stop=126314, step=1)
nba, like all
DataFrame objects, has two axes:
>>> nba.axes
[RangeIndex(start=0, stop=126314, step=1),
 Index(['gameorder', 'game_id', 'lg_id', '_iscopy', 'year_id', 'date_game',
        'seasongame', 'is_playoffs', 'team_id', 'fran_id', 'pts', 'elo_i',
        'elo_n', 'win_equiv', 'opp_id', 'opp_fran', 'opp_pts', 'opp_elo_i',
        'opp_elo_n', 'game_location', 'game_result', 'forecast', 'notes'],
       dtype='object')]
You can check the existence of a column with
.keys():
>>> "points" in nba.keys()
False
>>> "pts" in nba.keys()
True
The column is called
"pts", not
"points".
As you use these methods to answer questions about your dataset, be sure to keep in mind whether you’re working with a
Series or a
DataFrame so that your interpretation is accurate.
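One quick way to check which of the two you're holding is `type()`. As an extra illustration (not spelled out in the tutorial text above): selecting a single column with one label yields a `Series`, while selecting with a list of labels yields a `DataFrame`:

```python
import pandas as pd

df = pd.DataFrame({"pts": [100, 95], "opp_pts": [90, 105]})

# A single label returns a Series...
print(type(df["pts"]))    # <class 'pandas.core.series.Series'>

# ...while a list of labels returns a (one-column) DataFrame:
print(type(df[["pts"]]))  # <class 'pandas.core.frame.DataFrame'>
```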
Accessing Series Elements
In the section above, you’ve created a Pandas
Series based on a Python list and compared the two data structures. You’ve seen how a
Series object is similar to lists and dictionaries in several ways. A further similarity is that you can use the indexing operator (
[]) for
Series as well.
You’ll also learn how to use two Pandas-specific access methods:
.loc
.iloc
You’ll see that these data access methods can be much more readable than the indexing operator.
Using the Indexing Operator
Recall that a
Series has two indices:
- A positional or implicit index, which is always a
RangeIndex
- A label or explicit index, which can contain any hashable objects
Next, revisit the
city_revenues object:
>>> city_revenues
Amsterdam    4200
Toronto      8000
Tokyo        6500
dtype: int64
You can conveniently access the values in a
Series with both the label and positional indices:
>>> city_revenues["Toronto"]
8000
>>> city_revenues[1]
8000
You can also use negative indices and slices, just like you would for a list:
>>> city_revenues[-1]
6500
>>> city_revenues[1:]
Toronto    8000
Tokyo      6500
dtype: int64
>>> city_revenues["Toronto":]
Toronto    8000
Tokyo      6500
dtype: int64
If you want to learn more about the possibilities of the indexing operator, then check out Lists and Tuples in Python.
Using
.loc and
.iloc
The indexing operator (
[]) is convenient, but there’s a caveat. What if the labels are also numbers? Say you have to work with a
Series object like this:
>>> colors = pd.Series(
...     ["red", "purple", "blue", "green", "yellow"],
...     index=[1, 2, 3, 5, 8]
... )
>>> colors
1       red
2    purple
3      blue
5     green
8    yellow
dtype: object
What will
colors[1] return? For a positional index,
colors[1] is
"purple". However, if you go by the label index, then
colors[1] is referring to
"red".
The good news is, you don’t have to figure it out! Instead, to avoid confusion, the Pandas Python library provides two data access methods:
.locrefers to the label index.
.ilocrefers to the positional index.
These data access methods are much more readable:
>>> colors.loc[1]
'red'
>>> colors.iloc[1]
'purple'
colors.loc[1] returned
"red", the element with the label
1.
colors.iloc[1] returned
"purple", the element with the index
1.
The following figure shows which elements
.loc and
.iloc refer to:
Again,
.loc points to the label index on the right-hand side of the image. Meanwhile,
.iloc points to the positional index on the left-hand side of the picture.
It’s easier to keep in mind the distinction between
.loc and
.iloc than it is to figure out what the indexing operator will return. Even if you’re familiar with all the quirks of the indexing operator, it can be dangerous to assume that everybody who reads your code has internalized those rules as well!
Note: In addition to being confusing for
Series with numeric labels, the Python indexing operator has some performance drawbacks. It’s perfectly okay to use it in interactive sessions for ad-hoc analysis, but for production code, the
.loc and
.iloc data access methods are preferable. For further details, check out the Pandas User Guide section on indexing and selecting data.
.loc and
.iloc also support the features you would expect from indexing operators, like slicing. However, these data access methods have an important difference. While
.iloc excludes the closing element,
.loc includes it. Take a look at this code block:
>>> # Return the elements with the implicit index: 1, 2
>>> colors.iloc[1:3]
2    purple
3      blue
dtype: object
If you compare this code with the image above, then you can see that
colors.iloc[1:3] returns the elements with the positional indices of
1 and
2. The closing item
"green" with a positional index of
3 is excluded.
On the other hand,
.loc includes the closing element:
>>> # Return the elements with the explicit index between 3 and 8
>>> colors.loc[3:8]
3      blue
5     green
8    yellow
dtype: object
This code block says to return all elements with a label index between
3 and
8. Here, the closing item
"yellow" has a label index of
8 and is included in the output.
You can also pass a negative positional index to
.iloc:
>>> colors.iloc[-2]
'green'
You start from the end of the
Series and return the second element.
Note: There used to be an
.ix indexer, which tried to guess whether it should apply positional or label indexing depending on the data type of the index. Because it caused a lot of confusion, it has been deprecated since Pandas version 0.20.0.
It’s highly recommended that you do not use
.ix for indexing. Instead, always use
.loc for label indexing and
.iloc for positional indexing. For further details, check out the Pandas User Guide.
You can use the code blocks above to distinguish between two
Series behaviors:
- You can use
.ilocon a
Seriessimilar to using
[]on a list.
- You can use
.locon a
Seriessimilar to using
[]on a dictionary.
Be sure to keep these distinctions in mind as you access elements of your
Series objects.
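The two bullet points above can be sketched directly, putting a `Series` side by side with a plain list and a plain dictionary holding the same data:

```python
import pandas as pd

colors = pd.Series(["red", "purple", "blue"], index=["a", "b", "c"])
plain_list = ["red", "purple", "blue"]
plain_dict = {"a": "red", "b": "purple", "c": "blue"}

# .iloc behaves like [] on a list (positional):
assert colors.iloc[1] == plain_list[1] == "purple"

# .loc behaves like [] on a dictionary (by label):
assert colors.loc["b"] == plain_dict["b"] == "purple"
```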
Accessing DataFrame Elements
Since a
DataFrame consists of
Series objects, you can use the very same tools to access its elements. The crucial difference is the additional dimension of the
DataFrame. You’ll use the indexing operator for the columns and the access methods
.loc and
.iloc on the rows.
Using the Indexing Operator
If you think of a
DataFrame as a dictionary whose values are
Series, then it makes sense that you can access its columns with the indexing operator:
>>> city_data["revenue"]
Amsterdam    4200
Tokyo        6500
Toronto      8000
Name: revenue, dtype: int64
>>> type(city_data["revenue"])
pandas.core.series.Series
Here, you use the indexing operator to select the column labeled
"revenue".
If the column name is a string, then you can use attribute-style accessing with dot notation as well:
>>> city_data.revenue
Amsterdam    4200
Tokyo        6500
Toronto      8000
Name: revenue, dtype: int64
city_data["revenue"] and
city_data.revenue return the same output.
There’s one situation where accessing
DataFrame elements with dot notation may not work or may lead to surprises. This is when a column name coincides with a
DataFrame attribute or method name:
>>> toys = pd.DataFrame([
...     {"name": "ball", "shape": "sphere"},
...     {"name": "Rubik's cube", "shape": "cube"}
... ])
>>> toys["shape"]
0    sphere
1      cube
Name: shape, dtype: object
>>> toys.shape
(2, 2)
The indexing operation
toys["shape"] returns the correct data, but the attribute-style operation
toys.shape still returns the shape of the
DataFrame. You should only use attribute-style accessing in interactive sessions or for read operations. You shouldn’t use it for production code or for manipulating data (such as defining new columns).
Using
.loc and
.iloc
Similar to
Series, a
DataFrame also provides
.loc and
.iloc data access methods. Remember,
.loc uses the label and
.iloc the positional index:
>>> city_data.loc["Amsterdam"]
revenue           4200.0
employee_count       5.0
Name: Amsterdam, dtype: float64
>>> city_data.loc["Tokyo": "Toronto"]
         revenue  employee_count
Tokyo       6500             8.0
Toronto     8000             NaN
>>> city_data.iloc[1]
revenue           6500.0
employee_count       8.0
Name: Tokyo, dtype: float64
Each line of code selects a different row from
city_data:
city_data.loc["Amsterdam"]selects the row with the label index
"Amsterdam".
city_data.loc["Tokyo": "Toronto"]selects the rows with label indices from
"Tokyo"to
"Toronto". Remember,
.locis inclusive.
city_data.iloc[1]selects the row with the positional index
1, which is
"Tokyo".
Alright, you’ve used
.loc and
.iloc on small data structures. Now, it’s time to practice with something bigger! Use a data access method to display the second-to-last row of the
nba dataset. Then, expand the code block below to see a solution:
The second-to-last row is the row with the positional index of
-2. You can display it with
.iloc:
>>> nba.iloc[-2]
gameorder                      63157
game_id                 201506170CLE
lg_id                            NBA
_iscopy                            0
year_id                         2015
date_game                  6/16/2015
seasongame                       102
is_playoffs                        1
team_id                          CLE
fran_id                    Cavaliers
pts                               97
elo_i                        1700.74
elo_n                        1692.09
win_equiv                      59.29
opp_id                           GSW
opp_fran                    Warriors
opp_pts                          105
opp_elo_i                    1813.63
opp_elo_n                    1822.29
game_location                      H
game_result                        L
forecast                        0.48
notes                            NaN
date_played      2015-06-16 00:00:00
Name: 126312, dtype: object
You’ll see the output as a
Series object.
For a
DataFrame, the data access methods
.loc and
.iloc also accept a second parameter. While the first parameter selects rows based on the indices, the second parameter selects the columns. You can use these parameters together to select a subset of rows and columns from your
DataFrame:
>>> city_data.loc["Amsterdam": "Tokyo", "revenue"]
Amsterdam    4200
Tokyo        6500
Name: revenue, dtype: int64
Note that you separate the parameters with a comma (
,). The first parameter,
"Amsterdam": "Tokyo", says to select all rows between those two labels. The second parameter comes after the comma and says to select the
"revenue" column.
It’s time to see the same construct in action with the bigger
nba dataset. Select all games between the labels
5555 and
5559. You’re only interested in the names of the teams and the scores, so select those elements as well. Expand the code block below to see a solution:
First, define which rows you want to see, then list the relevant columns:
>>> nba.loc[5555:5559, ["fran_id", "opp_fran", "pts", "opp_pts"]]
You use
.loc for the label index and a comma (
,) to separate your two parameters.
You should see a small part of your quite huge dataset:
The output is much easier to read!
With data access methods like
.loc and
.iloc, you can select just the right subset of your
DataFrame to help you answer questions about your dataset.
Querying Your Dataset
You’ve seen how to access subsets of a huge dataset based on its indices. Now, you’ll select rows based on the values in your dataset’s columns to query your data. For example, you can create a new
DataFrame that contains only games played after 2010:
>>> current_decade = nba[nba["year_id"] > 2010]
>>> current_decade.shape
(12658, 24)
You now have 24 columns, but your new
DataFrame only consists of rows where the value in the
"year_id" column is greater than
2010.
You can also select the rows where a specific field is not null:
>>> games_with_notes = nba[nba["notes"].notnull()]
>>> games_with_notes.shape
(5424, 24)
This can be helpful if you want to avoid any missing values in a column. You can also use
.notna() to achieve the same goal.
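The equivalence is easy to check on a small stand-in `Series` (a hypothetical `notes` column, not the real dataset): `.notnull()` and `.notna()` produce the same Boolean mask, which you can then use to filter:

```python
import pandas as pd
import numpy as np

notes = pd.Series(["overtime", np.nan, "forfeit", np.nan])

# .notnull() and .notna() are aliases and return the same Boolean mask:
mask_null = notes.notnull()
mask_na = notes.notna()
assert mask_null.equals(mask_na)

print(notes[mask_na])  # keeps only the non-missing entries
```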
You can even access values of the
object data type as
str and perform string methods on them:
>>> ers = nba[nba["fran_id"].str.endswith("ers")]
>>> ers.shape
(27797, 24)
You use
.str.endswith() to filter your dataset and find all games where the home team’s name ends with
"ers".
You can combine multiple criteria and query your dataset as well. To do this, be sure to put each one in parentheses and use the logical operators
| and
& to separate them.
Note: The operators
and,
or,
&&, and
|| won’t work here. If you’re curious as to why, then check out the section on how the Pandas Python library uses Boolean operators in Python Pandas: Tricks & Features You May Not Know.
Do a search for Baltimore games where both teams scored over 100 points. In order to see each game only once, you’ll need to exclude duplicates:
>>> nba[
...     (nba["_iscopy"] == 0) &
...     (nba["pts"] > 100) &
...     (nba["opp_pts"] > 100) &
...     (nba["team_id"] == "BLB")
... ]
Here, you use
nba["_iscopy"] == 0 to include only the entries that aren’t copies.
Your output should contain five eventful games:
Try to build another query with multiple criteria. In the spring of 1992, both teams from Los Angeles had to play a home game at another court. Query your dataset to find those two games. Both teams have an ID starting with
"LA". Expand the code block below to see a solution:
You can use
.str to find the team IDs that start with
"LA", and you can assume that such an unusual game would have some notes:
>>> nba[
...     (nba["_iscopy"] == 0) &
...     (nba["team_id"].str.startswith("LA")) &
...     (nba["year_id"] == 1992) &
...     (nba["notes"].notnull())
... ]
Your output should show two games played on 5/3/1992:
Nice find!
When you know how to query your dataset with multiple criteria, you’ll be able to answer more specific questions about your dataset.
Grouping and Aggregating Your Data
You may also want to learn other features of your dataset, like the sum, mean, or average value of a group of elements. Luckily, the Pandas Python library offers grouping and aggregation functions to help you accomplish this task.
A
Series has more than twenty different methods for calculating descriptive statistics. Here are some examples:
>>> city_revenues.sum()
18700
>>> city_revenues.max()
8000
The first method returns the total of
city_revenues, while the second returns the max value. There are other methods you can use, like
.min() and
.mean().
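Continuing with the `city_revenues` values from earlier in the tutorial, `.min()` and `.mean()` work the same way as `.sum()` and `.max()`:

```python
import pandas as pd

city_revenues = pd.Series(
    [4200, 8000, 6500], index=["Amsterdam", "Toronto", "Tokyo"]
)

print(city_revenues.min())   # 4200
print(city_revenues.mean())  # 6233.33..., i.e. 18700 / 3
```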
Remember, a column of a
DataFrame is actually a
Series object. For this reason, you can use these same functions on the columns of
nba:
>>> points = nba["pts"]
>>> type(points)
<class 'pandas.core.series.Series'>
>>> points.sum()
12976235
A
DataFrame can have multiple columns, which introduces new possibilities for aggregations, like grouping:
>>> nba.groupby("fran_id", sort=False)["pts"].sum()
fran_id
Huskies       3995
Knicks      582497
Stags        20398
Falcons       3797
Capitols     22387
...
By default, Pandas sorts the group keys during the call to
.groupby(). If you don’t want to sort, then pass
sort=False. This parameter can lead to performance gains.
You can also group by multiple columns:
>>> nba[
...     (nba["fran_id"] == "Spurs") &
...     (nba["year_id"] > 2010)
... ].groupby(["year_id", "game_result"])["game_id"].count()
year_id  game_result
2011     L              25
         W              63
2012     L              20
         W              60
2013     L              30
         W              73
2014     L              27
         W              78
2015     L              31
         W              58
Name: game_id, dtype: int64
You can practice these basics with an exercise. Take a look at the Golden State Warriors’ 2014-15 season (
year_id: 2015). How many wins and losses did they record during the regular season and the playoffs? Expand the code block below for the solution:
First, you can group by the
"is_playoffs" field, then by the result:
>>> nba[
...     (nba["fran_id"] == "Warriors") &
...     (nba["year_id"] == 2015)
... ].groupby(["is_playoffs", "game_result"])["game_id"].count()
is_playoffs  game_result
0            L              15
             W              67
1            L               5
             W              16
is_playoffs=0 shows the results for the regular season, and
is_playoffs=1 shows the results for the playoffs.
In the examples above, you’ve only scratched the surface of the aggregation functions that are available to you in the Pandas Python library. To see more examples of how to use them, check out Pandas GroupBy: Your Guide to Grouping Data in Python.
Manipulating Columns
You’ll need to know how to manipulate your dataset’s columns in different phases of the data analysis process. You can add and drop columns as part of the initial data cleaning phase, or later based on the insights of your analysis.
Create a copy of your original
DataFrame to work with:
>>> df = nba.copy()
>>> df.shape
(126314, 24)
You can define new columns based on the existing ones:
>>> df["difference"] = df.pts - df.opp_pts
>>> df.shape
(126314, 25)
Here, you used the
"pts" and
"opp_pts" columns to create a new one called
"difference". This new column has the same functions as the old ones:
>>> df["difference"].max()
68
Here, you used the aggregation function
.max() to find the largest value of your new column.
You can also rename the columns of your dataset. It seems that
"game_result" and
"game_location" are too verbose, so go ahead and rename them now:
>>> renamed_df = df.rename(
...     columns={"game_result": "result", "game_location": "location"}
... )
>>> renamed_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 126314 entries, 0 to 126313
Data columns (total 25 columns):
 #   Column       Non-Null Count   Dtype
---  ------       --------------   -----
 0   gameorder    126314 non-null  int64
 ...
 19  location     126314 non-null  object
 20  result       126314 non-null  object
 21  forecast     126314 non-null  float64
 22  notes        5424 non-null    object
 23  date_played  126314 non-null  datetime64[ns]
 24  difference   126314 non-null  int64
dtypes: datetime64[ns](1), float64(6), int64(8), object(10)
memory usage: 24.1+ MB
Note that there’s a new object,
renamed_df. Like several other data manipulation methods,
.rename() returns a new
DataFrame by default. If you want to manipulate the original
DataFrame directly, then
.rename() also provides an
inplace parameter that you can set to
True.
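As a short sketch with a tiny hypothetical DataFrame (not the real nba data), the in-place variant looks like this:

```python
import pandas as pd

# A tiny stand-in frame with the two verbose column names
df = pd.DataFrame({"game_result": ["W", "L"], "game_location": ["H", "A"]})

# inplace=True mutates df directly and returns None
df.rename(
    columns={"game_result": "result", "game_location": "location"},
    inplace=True,
)
print(df.columns.tolist())  # ['result', 'location']
```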
Your dataset might contain columns that you don’t need. For example, Elo ratings may be a fascinating concept to some, but you won’t analyze them in this tutorial. You can delete the four columns related to Elo:
>>> df.shape
(126314, 25)
>>> elo_columns = ["elo_i", "elo_n", "opp_elo_i", "opp_elo_n"]
>>> df.drop(elo_columns, inplace=True, axis=1)
>>> df.shape
(126314, 21)
Remember, you added the new column
"difference" in a previous example, bringing the total number of columns to 25. When you remove the four Elo columns, the total number of columns drops to 21.
Specifying Data Types
When you create a new
DataFrame, either by calling a constructor or reading a CSV file, Pandas assigns a data type to each column based on its values. While it does a pretty good job, it’s not perfect. If you choose the right data type for your columns upfront, then you can significantly improve your code’s performance.
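One way to choose data types upfront is to pass the dtype parameter to read_csv. Here's a small sketch using an in-memory CSV with hypothetical data, not the real dataset:

```python
import io
import pandas as pd

# Hypothetical miniature CSV standing in for the real file
csv_data = io.StringIO("game_location,pts\nH,120\nA,95\nH,101\n")

# Assign the category dtype at read time instead of converting later
df = pd.read_csv(csv_data, dtype={"game_location": "category"})
print(df["game_location"].dtype)  # category
```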
Take another look at the columns of the
nba dataset:
>>> df.info()
You’ll see the same output as before:
Ten of your columns have the data type
object. Most of these
object columns contain arbitrary text, but there are also some candidates for data type conversion. For example, take a look at the
date_game column:
>>> df["date_game"] = pd.to_datetime(df["date_game"])
Here, you use
.to_datetime() to specify all game dates as
datetime objects.
Other columns contain text that are a bit more structured. The
game_location column can have only three different values:
>>> df["game_location"].nunique()
3
>>> df["game_location"].value_counts()
A    63138
H    63138
N       38
Name: game_location, dtype: int64
Which data type would you use in a relational database for such a column? You would probably not use a
varchar type, but rather an
enum. Pandas provides the
categorical data type for the same purpose:
>>> df["game_location"] = pd.Categorical(df["game_location"])
>>> df["game_location"].dtype
CategoricalDtype(categories=['A', 'H', 'N'], ordered=False)
categorical data has a few advantages over unstructured text. When you specify the
categorical data type, you make validation easier and save a ton of memory, as Pandas will only use the unique values internally. The higher the ratio of total values to unique values, the more space savings you’ll get.
Run
df.info() again. You should see that changing the
game_location data type from
object to
categorical has decreased the memory usage.
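You can verify the savings yourself with .memory_usage(). Here's a sketch using synthetic data with the same three values:

```python
import pandas as pd

# Synthetic column mimicking game_location's value distribution
locations = pd.Series(["A", "H"] * 10_000 + ["N"] * 38)

as_object = locations.memory_usage(deep=True)
as_category = locations.astype("category").memory_usage(deep=True)

# Only three unique values, so the categorical codes are far smaller
print(as_object, as_category)
```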
Note: The
categorical data type also gives you access to additional methods through the
.cat accessor. To learn more, check out the official docs.
You’ll often encounter datasets with too many text columns. An essential skill for data scientists to have is the ability to spot which columns they can convert to a more performant data type.
Take a moment to practice this now. Find another column in the
nba dataset that has a generic data type and convert it to a more specific one. You can expand the code block below to see one potential solution:
game_result can take only two different values:
>>> df["game_result"].nunique()
2
>>> df["game_result"].value_counts()
L    63157
W    63157
To improve performance, you can convert it into a
categorical column:
>>> df["game_result"] = pd.Categorical(df["game_result"])
You can use
df.info() to check the memory usage.
As you work with more massive datasets, memory savings becomes especially crucial. Be sure to keep performance in mind as you continue to explore your datasets.
Cleaning Data
You may be surprised to find this section so late in the tutorial! Usually, you’d take a critical look at your dataset to fix any issues before you move on to a more sophisticated analysis. However, in this tutorial, you’ll rely on the techniques that you’ve learned in the previous sections to clean your dataset.
Missing Values
Have you ever wondered why
.info() shows how many non-null values a column contains? It's because this is vital information. Null values often indicate a problem in the data-gathering process. They can make several analysis techniques, like different types of machine learning, difficult or even impossible.
When you inspect the
nba dataset with
nba.info(), you’ll see that it’s quite neat. Only the column
notes contains null values for the majority of its rows:
This output shows that the
notes column has only 5424 non-null values. That means that over 120,000 rows of your dataset have null values in this column.
Sometimes, the easiest way to deal with records containing missing values is to ignore them. You can remove all the rows with missing values using
.dropna():
>>> rows_without_missing_data = nba.dropna()
>>> rows_without_missing_data.shape
(5424, 24)
Of course, this kind of data cleanup doesn’t make sense for your
nba dataset, because it’s not a problem for a game to lack notes. But if your dataset contains a million valid records and a hundred where relevant data is missing, then dropping the incomplete records can be a reasonable solution.
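If only some columns matter, you can restrict the check with the subset parameter. A sketch with toy data:

```python
import numpy as np
import pandas as pd

# Toy records: a missing note is harmless, a missing score is not
games = pd.DataFrame({
    "pts": [101, np.nan, 95],
    "notes": [np.nan, "forfeit", None],
})

# Drop rows only when a column you actually rely on is missing
valid_games = games.dropna(subset=["pts"])
print(len(valid_games))  # 2
```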
You can also drop problematic columns if they’re not relevant for your analysis. To do this, use
.dropna() again and provide the
axis=1 parameter:
>>> data_without_missing_columns = nba.dropna(axis=1)
>>> data_without_missing_columns.shape
(126314, 23)
Now, the resulting
DataFrame contains all 126,314 games, but not the sometimes empty
notes column.
If there’s a meaningful default value for your use case, then you can also replace the missing values with that:
>>> data_with_default_notes = nba.copy()
>>> data_with_default_notes["notes"].fillna(
...     value="no notes at all",
...     inplace=True
... )
>>> data_with_default_notes["notes"].describe()
count              126314
unique                232
top       no notes at all
freq               120890
Name: notes, dtype: object
Here, you fill the empty
notes rows with the string
"no notes at all".
Invalid Values
Invalid values can be even more dangerous than missing values. Often, you can perform your data analysis as expected, but the results you get are peculiar. This is especially important if your dataset is enormous or relied on manual entry. Invalid values are often more challenging to detect, but you can implement some sanity checks with queries and aggregations.
One thing you can do is validate the ranges of your data. For this,
.describe() is quite handy. Recall that it returns the following output:
The
year_id varies between 1947 and 2015. That sounds plausible.
What about
pts? How can the minimum be
0? Let’s have a look at those games:
>>> nba[nba["pts"] == 0]
This query returns a single row:
It seems the game was forfeited. Depending on your analysis, you may want to remove it from the dataset.
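One way to remove such rows is a boolean filter. Here's a sketch on stand-in data:

```python
import pandas as pd

# Stand-in scores including one forfeited game with 0 points
games = pd.DataFrame({"pts": [0, 100, 95], "opp_pts": [2, 90, 99]})

# Keep only games where points were actually scored
games = games[games["pts"] > 0]
print(games.shape)  # (2, 2)
```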
Inconsistent Values
Sometimes a value would be entirely realistic in and of itself, but it doesn’t fit with the values in the other columns. You can define some query criteria that are mutually exclusive and verify that these don’t occur together.
In the NBA dataset, the values of the fields
pts,
opp_pts and
game_result should be consistent with each other. You can check this using the
.empty attribute:
>>> nba[(nba["pts"] > nba["opp_pts"]) & (nba["game_result"] != 'W')].empty
True
>>> nba[(nba["pts"] < nba["opp_pts"]) & (nba["game_result"] != 'L')].empty
True
Fortunately, both of these queries return an empty
DataFrame.
Be prepared for surprises whenever you’re working with raw datasets, especially if they were gathered from different sources or through a complex pipeline. You might see rows where a team scored more points than their opponent, but still didn’t win—at least, according to your dataset! To avoid situations like this, make sure you add further data cleaning techniques to your Pandas and Python arsenal.
Combining Multiple Datasets
In the previous section, you’ve learned how to clean a messy dataset. Another aspect of real-world data is that it often comes in multiple pieces. In this section, you’ll learn how to grab those pieces and combine them into one dataset that’s ready for analysis.
Earlier, you combined two
Series objects into a
DataFrame based on their indices. Now, you’ll take this one step further and use
.concat() to combine
city_data with another
DataFrame. Say you’ve managed to gather some data on two more cities:
>>> further_city_data = pd.DataFrame(
...     {"revenue": [7000, 3400], "employee_count": [2, 2]},
...     index=["New York", "Barcelona"]
... )
This second
DataFrame contains info on the cities
"New York" and
"Barcelona".
You can add these cities to
city_data using
.concat():
>>> all_city_data = pd.concat([city_data, further_city_data], sort=False)
>>> all_city_data
           revenue  employee_count
Amsterdam     4200             5.0
Tokyo         6500             8.0
Toronto       8000             NaN
New York      7000             2.0
Barcelona     3400             2.0
Now, the new variable
all_city_data contains the values from both
DataFrame objects.
Note: As of Pandas version 0.25.0, the
sort parameter’s default value is
True, but this will change to
False soon. It’s good practice to provide an explicit value for this parameter to ensure that your code works consistently in different Pandas and Python versions. For more info, consult the Pandas User Guide.
By default,
concat() combines along
axis=0. In other words, it appends rows. You can also use it to append columns by supplying the parameter
axis=1:
>>> city_countries = pd.DataFrame({
...     "country": ["Holland", "Japan", "Holland", "Canada", "Spain"],
...     "capital": [1, 1, 0, 0, 0]},
...     index=["Amsterdam", "Tokyo", "Rotterdam", "Toronto", "Barcelona"]
... )
>>> cities = pd.concat([all_city_data, city_countries], axis=1, sort=False)
>>> cities
           revenue  employee_count  country  capital
Amsterdam   4200.0             5.0  Holland      1.0
Tokyo       6500.0             8.0    Japan      1.0
Toronto     8000.0             NaN   Canada      0.0
New York    7000.0             2.0      NaN      NaN
Barcelona   3400.0             2.0    Spain      0.0
Rotterdam      NaN             NaN  Holland      0.0
Note how Pandas added
NaN for the missing values. If you want to combine only the cities that appear in both
DataFrame objects, then you can set the
join parameter to
inner:
>>> pd.concat([all_city_data, city_countries], axis=1, join="inner")
           revenue  employee_count  country  capital
Amsterdam     4200             5.0  Holland        1
Tokyo         6500             8.0    Japan        1
Toronto       8000             NaN   Canada        0
Barcelona     3400             2.0    Spain        0
While it’s most straightforward to combine data based on the index, it’s not the only possibility. You can use
.merge() to implement a join operation similar to the one from SQL:
>>> countries = pd.DataFrame({
...     "population_millions": [17, 127, 37],
...     "continent": ["Europe", "Asia", "North America"]
... }, index=["Holland", "Japan", "Canada"])
>>> pd.merge(cities, countries, left_on="country", right_index=True)
Here, you pass the parameter
left_on="country" to
.merge() to indicate what column you want to join on. The result is a bigger
DataFrame that contains not only city data, but also the population and continent of the respective countries:
Note that the result contains only the cities where the country is known and appears in the joined
DataFrame.
.merge() performs an inner join by default. If you want to include all cities in the result, then you need to provide the
how parameter:
>>> pd.merge(
...     cities,
...     countries,
...     left_on="country",
...     right_index=True,
...     how="left"
... )
With this
left join, you’ll see all the cities, including those without country data:
Welcome back, New York & Barcelona!
Visualizing Your Pandas DataFrame
Data visualization is one of the things that works much better in a Jupyter notebook than in a terminal, so go ahead and fire one up. If you need help getting started, then check out Jupyter Notebook: An Introduction. You can also access the Jupyter notebook that contains the examples from this tutorial by clicking the link below:
Get Jupyter Notebook: Click here to get the Jupyter Notebook you’ll use to explore data with Pandas in this tutorial.
Include this line to show plots directly in the notebook:
%matplotlib inline
Both
Series and
DataFrame objects have a
.plot() method, which is a wrapper around
matplotlib.pyplot.plot(). By default, it creates a line plot. Visualize how many points the Knicks scored throughout the seasons:
>>> nba[nba["fran_id"] == "Knicks"].groupby("year_id")["pts"].sum().plot()
This shows a line plot with several peaks and two notable valleys around the years 2000 and 2010:
You can also create other types of plots, like a bar plot:
>>> nba["fran_id"].value_counts().head(10).plot(kind="bar")
This will show the franchises with the most games played:
The Lakers are leading the Celtics by a minimal edge, and there are six further teams with a game count above 5000.
Now try a more complicated exercise. In 2013, the Miami Heat won the championship. Create a pie plot showing the count of their wins and losses during that season. Then, expand the code block to see a solution:
First, you define a criteria to include only the Heat’s games from 2013. Then, you create a plot in the same way as you’ve seen above:
>>> nba[
...     (nba["fran_id"] == "Heat") &
...     (nba["year_id"] == 2013)
... ]["game_result"].value_counts().plot(kind="pie")
Here’s what a champion pie looks like:
The slice of wins is significantly larger than the slice of losses!
Sometimes, the numbers speak for themselves, but often a chart helps a lot with communicating your insights. To learn more about visualizing your data, check out Interactive Data Visualization in Python With Bokeh.
Conclusion
In this tutorial, you’ve learned how to start exploring a dataset with the Pandas Python library. You saw how you could access specific rows and columns to tame even the largest of datasets. Speaking of taming, you’ve also seen multiple techniques to prepare and clean your data, by specifying the data type of columns, dealing with missing values, and more. You’ve even created queries, aggregations, and plots based on those.
Now you can:
- Work with Series and DataFrame objects
- Subset your data with .loc, .iloc, and the indexing operator
- Answer questions with queries, grouping, and aggregation
- Handle missing, invalid, and inconsistent data
- Visualize your dataset in a Jupyter notebook
This journey using the NBA stats only scratches the surface of what you can do with the Pandas Python library. You can power up your project with Pandas tricks, learn techniques to speed up Pandas in Python, and even dive deep to see how Pandas works behind the scenes. There are many more features for you to discover, so get out there and tackle those datasets!
Note: This post covers the basics of setting up Wicket on GAE. This subsequent post takes the next steps of setting up Spring and persistence with JDO and provides the source code of a sample application.
1. Use the Eclipse plugin to create a project
The Eclipse plugin can be installed via Eclipse Update Manager and it includes a Google App Engine SDK, so you don't need to install that separately. With the plugin, create a "New Web Application Project".
Uncheck "Use Google Web Toolkit" and leave "Use Google App Engine" activated. The project will contain a
src folder and a
war folder. It contains a simple "Hello, World" example and you can run it immediately. But let's get Wicket running, quickly.
2. Add Wicket jars
Add the following jars to the
war/WEB-INF/lib directory and add them as external jars to the Eclipse project:
- wicket-1.3.5.jar
- slf4j-api-1.5.2.jar
- slf4j-log4j12-1.4.2.jar
- log4j-1.2.13.jar
3. Turn on logging
Turn on logging so that you can see any strange things that may occur. App Engine won't tell you much by default.
- log4j.properties: log4j.logger.org.apache.wicket=DEBUG, A1
- java6 logging.properties: .level = INFO
4. Create a Wicket Application and a Page
Add
WicketApplication.java
package wicket;

import org.apache.wicket.protocol.http.WebApplication;

public class WicketApplication extends WebApplication {
    public Class getHomePage() {
        return HomePage.class;
    }
}
and a simple
HomePage.java and HomePage.html:
import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;
import org.apache.wicket.model.Model;

public class HomePage extends WebPage {
    public HomePage() {
        add(new Label("label", new Model("Hello, World")));
    }
}
5. Set up WicketFilter
Add the Wicket servlet filter to
web.xml. Alternatively use
WicketServlet:
<filter>
    <filter-name>WicketFilter</filter-name>
    <filter-class>org.apache.wicket.protocol.http.WicketFilter</filter-class>
    <init-param>
        <param-name>applicationClassName</param-name>
        <param-value>wicket.WicketApplication</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>WicketFilter</filter-name>
    <url-pattern>/wicket/*</url-pattern>
</filter-mapping>

6. Add HTTP session support
Enable HTTP session support (by default it's disabled) by adding the following line to
appengine-web.xml:
<sessions-enabled>true</sessions-enabled>
Now the app is ready to run, but there are still some issues.
7. Disable resource modification watching
When Wicket is started in development mode now, exceptions are raised because Wicket spawns threads to check resource files such as HTML files for modifications. Let's just disable resource modification checking by setting the resource poll frequency in our
WicketApplication's
init() method to
null.
protected void init() {
getResourceSettings().setResourcePollFrequency(null);
}
8. Disable SecondLevelCacheSessionStore
We cannot use the
SecondLevelCacheSessionStore in the app engine, which is the default. At least not with a
DiskPageStore, because this serializes the pages to the disk. But writing to the disk is not allowed in the app engine. You can use the simple
HttpSessionStore implementation instead (this increases HTTP session size), by overriding
newSessionStore() in our
WicketApplication:
protected ISessionStore newSessionStore() {
//return new SecondLevelCacheSessionStore(this, new DiskPageStore());
return new HttpSessionStore(this);
}
9. Run
From the context menu of the project, call
Run As -> Web Application. Open your browser and see the app running.
Download the demo as an eclipse poject.
25 comments:
thank you for this! works great so far.
I used 1.4-rc2, besides the fact that slf4j is 1.2.14 I also had to add slf4j-api in addition to slf4j-log4j.
7. Disable resource modification watching
I know you have to do this, but has anyone figured out a way to disable resource caching in development without resource modification watching, since one has to restart the debugger to see a change in an HTML page otherwise?
Thanks! Since I run the demo app I immediately tried to use Wicket but keep encountering errors. I give up after 3 hours but thanks to this post at least its running.
"Turn on loggin, so that you can see any strange thins, that my occure"
!!!! Write out one hundred times !!!!
Turn on logging, so that you can see any strange things, that may occur
Ha ha
@Anonymous: Man, that's so terrible. Fixed.
I have a "hack" to reload resources in Wicket on Google App Engine
Thank you for this article. I used it as a reference in my article (in French) about the same subject.
Then I deployed a test application on GAE. It worked, but I had a "little" problem with "Collections.unmodifiableList" - I had to remove them to be GAE compatible - which is astonishing...
Hi,
I am new to Wicket and your demo is a great resource. Would you be kind enough to extend your demo to show how to use Wicket with the Google data store?
@Troy Actually, I'm planning to to do a Wicket/GAE project. But sadly, currently I'm not having that much time working on it. So, it might take some time to see post on this topic.
For those who are playing with the DataStore: It seems that classes that has been "enhanced" by the DataNucleus JPA implementation can't be serialized. This means that you can't put them into the session :-(
Has anyone solved this or can anyone prove me wrong?
Regards,
Stefan
I tried to upload your example but only get a 404 when I click on the link on the first page. Did you really get this running deployed on GAE or just locally?
Regards,
Per
Merged your sample with an existing GAE/J application. Worked without any issues "on the real thing".
Merci!
mfg OK
@Per Lundholm: Actually, I did not deploy the sample application i posted, but I successfully deployed another app.
I had another look at the sample app. It includes an index.html with a wrong link. Try instead. This is where the WicketServlet is located.
Oh, thanks ... I should have figured that out myself. And the slash on the end is sooo important.
The resource modification watching will be fixed as of Wicket 1.4.0-RC6. The Wicket guys introduced a replaceable IModificationWatcher interface.
8. Disable SecondLevelCacheSessionStore
Is it possible to use memcache instead of session as a replacement to writing to file? If so, do you have any implementation ideas?
Thx, B
@bwinfrey: Sounds like a nice idea. Haven't tried yet, though.
Nice but why do you log via LOG4J when Wicket uses SLF4J and GAE JUL? Its better to use slf4j-jdk14.
I have an application running quite well on GAE using wicket + spring.
Unfortunately i'm having SERIOUS performance problems (3 seconds per request). Has anyone else developed a GAE+wicket app that is in use and performant?
i've the same performance problems - more than 3 secound per request (ajax). i tried in deployment and development mode - same problem. doeas anyone solved this problem?
Be sure you don't have the wicket-jmx jar in your app, it will fail with an error:
java.lang.NoClassDefFoundError: java.lang.management.ManagementFactory is a restricted class. Please see the Google App Engine developer's guide for more details.
I am still looking for a Big table based PageStore, I know Disk based page store isnt going to work, and
I dont want to keep every thing in session.
i had to add serialVersionUID to my wicket pages since they are stored in the session and serialized. I guess they are not required by Eclipse when I create a wicket page. before i did that, i was receiving an exception like:
javax.servlet.ServletException: java.lang.RuntimeException: java.io.InvalidClassException: wicket.LoginPage; local class incompatible: stream classdesc serialVersionUID
With wicket 1.5 use on application init getStoreSettings().setAsynchronous(false); or you will run into modifythreadgroup problem.
I have been getting an INTERNAL_SERVER_ERROR for my wicket page but I'm not getting any logging to give more details. On point #3 above, I understood that I should add the details you show to the filename that comes before the colon. Did you mean that I should put those somewhere in the project properties?
ldns_rr_new man page
ldns_rr_new, ldns_rr_new_frm_type, ldns_rr_new_frm_str, ldns_rr_new_frm_fp, ldns_rr_free, ldns_rr_print — ldns_rr creation, destruction and printing
Synopsis
#include <stdint.h>
#include <stdbool.h>
#include <ldns/ldns.h>
ldns_rr* ldns_rr_new(void);
ldns_rr* ldns_rr_new_frm_type(ldns_rr_type t);
ldns_status ldns_rr_new_frm_str(ldns_rr **n, const char *str, uint32_t default_ttl, const ldns_rdf *origin, ldns_rdf **prev);
ldns_status ldns_rr_new_frm_fp(ldns_rr **rr, FILE *fp, uint32_t *default_ttl, ldns_rdf **origin, ldns_rdf **prev);
void ldns_rr_free(ldns_rr *rr);
void ldns_rr_print(FILE *output, const ldns_rr *rr);
Description
ldns_rr_new() creates a new rr structure.
Returns ldns_rr *
ldns_rr_new_frm_type() creates a new rr structure, based on the given type. alloc enough space to hold all the rdf's
ldns_rr_new_frm_str() creates an rr from a string. The string should be a fully filled-in rr, like ownername <space> TTL <space> CLASS <space> TYPE <space> RDATA.
n: the rr to return
str: the string to convert
default_ttl: default ttl value for the rr. If 0 DEF_TTL will be used
origin: when the owner is relative add this. The caller must ldns_rdf_deep_free it.
prev: the previous ownername. if this value is not NULL, the function overwrites this with the ownername found in this string. The caller must then ldns_rdf_deep_free it.
Returns a status msg describing an error or LDNS_STATUS_OK
ldns_rr_new_frm_fp() creates a new rr from a file containing a string.
rr: the new rr
fp: the file pointer to use
default_ttl: pointer to a default ttl for the rr. If NULL DEF_TTL will be used the pointer will be updated if the file contains a $TTL directive
origin: when the owner is relative add this the pointer will be updated if the file contains a $ORIGIN directive The caller must ldns_rdf_deep_free it.
prev: when the owner is whitespaces use this as the * ownername the pointer will be updated after the call The caller must ldns_rdf_deep_free it.
Returns a ldns_status with an error or LDNS_STATUS_OK
ldns_rr_free() frees an RR structure
*rr: the RR to be freed
Returns void
ldns_rr_print() Prints the data in the resource record to the given file stream (in presentation format)
output: the file stream to print to
rr: the resource record to print
Returns void
Licensed under the BSD License. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See Also
ldns_rr, ldns_rr_list. And perldoc Net::DNS, RFC1034, RFC1035, RFC4033, RFC4034 and RFC4035.
Remarks
This manpage was automatically generated from the ldns source code by use of Doxygen and some perl.
Referenced By
The man pages ldns_rr_free(3), ldns_rr_new_frm_fp(3), ldns_rr_new_frm_str(3) and ldns_rr_new_frm_type(3) are aliases of ldns_rr_new(3).
How to initiate a binding property updating by change another property from javascript?
- DenisFromNN last edited by DenisFromNN
Hello
I have a QML sample that sets the source of an image, which automatically changes the image's width. I want the width of the parent to be updated as well, without explicit intervention:
import QtQuick 2.3
import QtQuick.Controls 1.3

Item {
    width: 300; height: 300
    Item {
        id: item
        width: element.width
        height: element.height
        Column {
            id: element
            Image { id: image }
            Label {
                id: label
                text: "K"
            }
        }
        Component.onCompleted: {
            console.log("Width item: " + item.width + " element: " + element.width +
                        " image: " + image.width + " label: " + label.width)
            image.source = "img.bmp"
            console.log("Width item: " + item.width + " element: " + element.width +
                        " image: " + image.width + " label: " + label.width)
        }
    }
}
Console output shows that the width of the image is updated, but the width of the column (named "element") isn't. How can I make it update automatically?
- chrisadams last edited by
You should be able to do this declaratively, by declaring either:
width: image.width
OR
width: childrenRect.width
for the "element" column.
Note that this will "bind" the values. Binding is automatic in QML declarations. If you wish to bind from an imperative statement (ie, from within a JavaScript function) you can use:
width = Qt.binding(function() { return image.width });
Hope this helps!
Introduction to Bazel: Common C++ Build Use Cases
Here you will find some of the most common use cases for building C++ projects with Bazel. If you have not done so already, get started with building C++ projects with Bazel by completing the tutorial Introduction to Bazel: Build a C++ Project.
Contents
- Including multiple files in a target
- Using transitive includes
- Adding include paths
- Including external libraries
- Writing and running C++ tests
- Adding dependencies on precompiled libraries
Including multiple files in a target
You can include multiple files in a single target with glob. For example:
cc_library(
    name = "build-all-the-files",
    srcs = glob(["*.cc"]),
    hdrs = glob(["*.h"]),
)
With this target, Bazel will build all the
.cc and
.h files it finds in the
same directory as the
BUILD file that contains this target (excluding
subdirectories).
Using transitive includes
If a file includes a header, then the file’s rule should depend on that header’s
library. Conversely, only direct dependencies need to be specified as
dependencies. For example, suppose
sandwich.h includes
bread.h and
bread.h includes
flour.h.
sandwich.h doesn’t include
flour.h (who wants
flour in their sandwich?), so the
BUILD file would look like this:
cc_library(
    name = "sandwich",
    srcs = ["sandwich.cc"],
    hdrs = ["sandwich.h"],
    deps = [":bread"],
)

cc_library(
    name = "bread",
    srcs = ["bread.cc"],
    hdrs = ["bread.h"],
    deps = [":flour"],
)

cc_library(
    name = "flour",
    srcs = ["flour.cc"],
    hdrs = ["flour.h"],
)
Here, the
sandwich library depends on the
bread library, which depends
on the
flour library.
Adding include paths
Sometimes you cannot (or do not want to) root include paths at the workspace root. Existing libraries might already have an include directory that doesn’t match its path in your workspace. For example, suppose you have the following directory structure:
└── my-project ├── legacy │ └── some_lib │ ├── BUILD │ ├── include │ │ └── some_lib.h │ └── some_lib.cc └── WORKSPACE
Bazel will expect some_lib.h to be included as legacy/some_lib/include/some_lib.h, but suppose some_lib.cc includes "include/some_lib.h". To make that include path valid, legacy/some_lib/BUILD will need to specify that the some_lib/include directory is an include directory:
cc_library( name = "some_lib", srcs = ["some_lib.cc"], hdrs = ["include/some_lib.h"], copts = ["-Ilegacy/some_lib/include"], )
This is especially useful for external dependencies, as their header files must otherwise be included with a / prefix.
Including external libraries
Suppose you are using Google Test. You can download it with one of the repository rules, such as http_archive, in your WORKSPACE file. Because the archive extracts into a versioned googletest-release-1.7.0 directory, you can strip this prefix by adding the strip_prefix attribute to the repository rule:
strip_prefix = "googletest-release-1.7.0",
Then gtest.BUILD would look like this:
cc_library( name = "main", srcs = glob( ["src/*.cc"], exclude = ["src/gtest-all.cc"] ), hdrs = glob([ "include/**/*.h", "src/*.h" ]), copts = ["-Iexternal/gtest/include"], linkopts = ["-pthread"], visibility = ["//visibility:public"], )
Now cc_ rules can depend on @gtest//:main.
Writing and running C++ tests
For example, we could create a test ./test/hello-test.cc such as:
#include "gtest/gtest.h" #include "lib/hello-greet.h" TEST(HelloTest, GetGreet) { EXPECT_EQ(get_greet("Bazel"), "Hello Bazel"); }
Then create a ./test/BUILD file for your tests:
cc_test( name = "hello-test", srcs = ["hello-test.cc"], copts = ["-Iexternal/gtest/include"], deps = [ "@gtest//:main", "//main:hello-greet", ], )
Note that in order to make hello-greet visible to hello-test, we have to add "//test:__pkg__", to the visibility attribute in ./main/BUILD.
Now you can use bazel test to run the test.
bazel test test:hello-test
This produces the following output:
INFO: Found 1 test target... Target //test:hello-test up-to-date: bazel-bin/test/hello-test INFO: Elapsed time: 4.497s, Critical Path: 2.53s //test:hello-test PASSED in 0.3s Executed 1 out of 1 tests: 1 test passes.
Adding dependencies on precompiled libraries
If you want to use a library of which you only have a compiled version (for example, headers and a .so file), wrap it in a cc_library rule:
cc_library( name = "mylib", srcs = ["mylib.so"], hdrs = ["mylib.h"], )
This way, other C++ targets in your workspace can depend on this rule.
https://docs.bazel.build/versions/0.29.0/cpp-use-cases.html
Multi-Label Image Classification on Movies Poster using CNN
Multi-Label Image Classification in Python
In this project, we are going to train our model on a set of labeled movie posters. The model will predict the genres of the movie based on the movie poster. We will consider a set of 25 genres. Each poster can have more than one genre.
What is multi-label classification?
- In multi-label classification, the training set is composed of instances each associated with a set of labels, and the task is to predict the label sets of unseen instances through analyzing training instances with known label sets.
- Multi-label classification and the strongly related problem of multi-output classification are variants of the classification problem where multiple labels may be assigned to each instance.
- In the multi-label problem, there is no constraint on how many of the classes the instance can be assigned to.
- Multiclass classification makes the assumption that each sample is assigned to one and only one label whereas Multilabel classification assigns to each sample a set of target labels.
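The distinction above can be sketched with plain Python. This is an illustrative example, not from the article: the hypothetical genre list and encode helper just show how multi-label targets become 0/1 indicator vectors, the same layout as the 25-column target matrix used later in this project.

```python
# Hypothetical label set for illustration only.
genres = ['Action', 'Comedy', 'Drama', 'Romance']

def encode(labels, classes):
    """Return a 0/1 indicator vector: 1 for each class present in labels."""
    return [1 if c in labels else 0 for c in classes]

# Multiclass: exactly one label per sample.
print(encode(['Drama'], genres))             # [0, 0, 1, 0]

# Multi-label: any number of labels per sample, no constraint.
print(encode(['Drama', 'Romance'], genres))  # [0, 0, 1, 1]
```

In the multi-label row, two entries are 1 at once, which a multiclass setup would never allow.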
Dataset
Dataset Link:
This dataset was collected from the IMDB website. One poster image was collected from one (mostly) Hollywood movie released from 1980 to 2015. Each poster image is associated with a movie as well as some metadata like ID, genres, and box office. The ID of each image is set as its file name.
You can even clone the github repository using the following command to get the dataset.
!git clone
tqdm is a progress bar library with good support for nested loops and Jupyter/IPython notebooks.
import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras.layers import Flatten, Dense, Dropout, BatchNormalization, Conv2D, MaxPool2D from tensorflow.keras.optimizers import Adam from tensorflow.keras.preprocessing import image print(tf.__version__)
2.1.0
import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from tqdm import tqdm
Here we are reading the dataset into a pandas dataframe using pd.read_csv(). The dataset contains 7254 rows and 27 columns.
data = pd.read_csv('/content/Movies-Poster_Dataset/train.csv') data.shape
(7254, 27)
data.head() shows the first 5 rows of the dataset. The dataset contains the Id of the image. The images are stored in a separate folder with the Id as the name of the image file.
data.head()
The images in the dataset are of different sizes. To process the images we need to convert them into a fixed size.
tqdm() shows a progress bar while loading the images. We are going to loop through each image using its path. We are going to convert each image to a fixed size of 350×350. The values in the images are between 0 to 255. Neural networks work well with values between 0 to 1. Hence we are going to normalize the values by dividing all of the values by 255.
img_width = 350 img_height = 350 X = [] for i in tqdm(range(data.shape[0])): path = '/content/Movies-Poster_Dataset/Images/' + data['Id'][i] + '.jpg' img = image.load_img(path, target_size=(img_width, img_height, 3)) img = image.img_to_array(img) img = img/255.0 X.append(img) X = np.array(X)
100%|██████████| 7254/7254 [00:33<00:00, 216.79it/s]
X is a numpy array which holds 7254 images. Each image has the size 350×350 and has 3 channels, as the images are coloured.
X.shape
(7254, 350, 350, 3)
Now we will see an image from X.
plt.imshow(X[1])
We can get the Genre of the above image from data.
data['Genre'][1]
"['Drama', 'Romance', 'Music']"
Now we will prepare the dataset. We have already got the feature space in X. Now we will get the target in y. For that, we will drop the Id and Genre columns from data.
y = data.drop(['Id', 'Genre'], axis = 1) y = y.to_numpy() y.shape
(7254, 25)
Now we will split the data into training and testing sets with the help of train_test_split(). test_size = 0.15 will keep 15% of the data for testing, and 85% of the data will be used for training the model. random_state controls the shuffling applied to the data before applying the split.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0, test_size = 0.15)
This is the input shape which we will pass as a parameter to the first layer of our model.
X_train[0].shape
(350, 350, 3)
Build CNN
In the first Conv2D() layer we are learning a total of 16 filters, with the size of the convolutional window as 3×3. MaxPool2D downsamples the input representation by taking the maximum value over the window defined by pool_size for each dimension along the features axis. The pool_size is 2×2 in our model. The final Dense layer has 25 neurons because we are predicting a probability for each of the 25 classes.
The Sigmoid function is used because its output lies between 0 and 1, which lets us predict a binary output for each class. Even though we are predicting more than one label, if you look at the target y all the values are either 0 or 1. Hence Sigmoid is the appropriate activation function for this model.
model = Sequential() model.add(Conv2D(16, (3,3), activation='relu', input_shape = X_train[0].shape)) model.add(BatchNormalization()) model.add(MaxPool2D(2,2)) model.add(Dropout(0.3)) model.add(Conv2D(32, (3,3), activation='relu')) model.add(BatchNormalization()) model.add(MaxPool2D(2,2)) model.add(Dropout(0.3)) model.add(Conv2D(64, (3,3), activation='relu')) model.add(BatchNormalization()) model.add(MaxPool2D(2,2)) model.add(Dropout(0.4)) model.add(Conv2D(128, (3,3), activation='relu')) model.add(BatchNormalization()) model.add(MaxPool2D(2,2)) model.add(Dropout(0.5)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.5)) model.add(Dense(128, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.5)) model.add(Dense(25, activation='sigmoid'))
model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 348, 348, 16) 448 _________________________________________________________________ batch_normalization (BatchNo (None, 348, 348, 16) 64 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 174, 174, 16) 0 _________________________________________________________________ dropout (Dropout) (None, 174, 174, 16) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 172, 172, 32) 4640 _________________________________________________________________ batch_normalization_1 (Batch (None, 172, 172, 32) 128 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 86, 86, 32) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 86, 86, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 84, 84, 64) 18496 _________________________________________________________________ batch_normalization_2 (Batch (None, 84, 84, 64) 256 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 42, 42, 64) 0 _________________________________________________________________ dropout_2 (Dropout) (None, 42, 42, 64) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 40, 40, 128) 73856 _________________________________________________________________ batch_normalization_3 (Batch (None, 40, 40, 128) 512 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 20, 20, 128) 0 _________________________________________________________________ dropout_3 (Dropout) (None, 20, 20, 128) 0 _________________________________________________________________ 
flatten (Flatten) (None, 51200) 0 _________________________________________________________________ dense (Dense) (None, 128) 6553728 _________________________________________________________________ batch_normalization_4 (Batch (None, 128) 512 _________________________________________________________________ dropout_4 (Dropout) (None, 128) 0 _________________________________________________________________ dense_1 (Dense) (None, 128) 16512 _________________________________________________________________ batch_normalization_5 (Batch (None, 128) 512 _________________________________________________________________ dropout_5 (Dropout) (None, 128) 0 _________________________________________________________________ dense_2 (Dense) (None, 25) 3225 ================================================================= Total params: 6,672,889 Trainable params: 6,671,897 Non-trainable params: 992 _________________________________________________________________
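Before compiling, it is worth seeing concretely why the last layer uses sigmoid rather than softmax for multi-label output. A small sketch (not part of the tutorial code; the logits are made-up scores for 3 hypothetical classes): sigmoid scores each class independently, so several classes can be likely at once, while softmax forces the probabilities to compete and sum to 1.

```python
import math

def sigmoid(z):
    # Element-wise logistic function: maps any score into (0, 1).
    return 1 / (1 + math.exp(-z))

def softmax(zs):
    # Normalized exponentials: outputs always sum to exactly 1.
    exps = [math.exp(z) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.5, -3.0]           # hypothetical raw scores for 3 classes

sig = [sigmoid(z) for z in logits]  # independent per-class probabilities
print(sum(sig) > 1)                 # True: no constraint that they sum to 1

soft = softmax(logits)
print(abs(sum(soft) - 1) < 1e-9)    # True: softmax always sums to 1
```

With sigmoid, a poster can be scored as both Drama and Romance with high probability at the same time, which is exactly what multi-label prediction needs.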
Now we will compile and fit the model. We will use the Adam optimizer (imported above), binary_crossentropy loss, and train for 5 epochs:
model.compile(optimizer='adam', loss = 'binary_crossentropy', metrics=['accuracy']) history = model.fit(X_train, y_train, epochs=5, validation_data=(X_test, y_test))
Train on 6165 samples, validate on 1089 samples Epoch 1/5 6165/6165 [==============================] - 720s 117ms/sample - loss: 0.6999 - accuracy: 0.6431 - val_loss: 0.5697 - val_accuracy: 0.7450 Epoch 2/5 6165/6165 [==============================] - 721s 117ms/sample - loss: 0.3138 - accuracy: 0.8869 - val_loss: 0.2531 - val_accuracy: 0.9071 Epoch 3/5 6165/6165 [==============================] - 718s 116ms/sample - loss: 0.2615 - accuracy: 0.9057 - val_loss: 0.2407 - val_accuracy: 0.9078 Epoch 4/5 6165/6165 [==============================] - 720s 117ms/sample - loss: 0.2525 - accuracy: 0.9085 - val_loss: 0.2388 - val_accuracy: 0.9096 Epoch 5/5 6165/6165 [==============================] - 723s 117ms/sample - loss: 0.2468 - accuracy: 0.9100 - val_loss: 0.2362 - val_accuracy: 0.9119
We can see that the validation accuracy is more than the training accuracy and the validation loss is less than the training loss. Hence the model is not overfitting.
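That comparison can be made explicit in code. A quick sketch (the numbers are copied from the training log above; in a real run you would read them from history.history, the dict Keras stores per-epoch metrics in):

```python
# Per-epoch losses copied from the training log above.
history = {
    'loss':     [0.6999, 0.3138, 0.2615, 0.2525, 0.2468],
    'val_loss': [0.5697, 0.2531, 0.2407, 0.2388, 0.2362],
}

final_train = history['loss'][-1]
final_val = history['val_loss'][-1]

# Overfitting would show up as validation loss climbing above training loss.
print(final_val <= final_train)  # True: no sign of overfitting here
```

The same check on history.history['accuracy'] and history.history['val_accuracy'] leads to the same conclusion.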
Testing of model
Now we are going to test our model by giving it a new image. We will pre-process the image in the same way as we have pre-processed the images in the training and testing dataset. We will normalize them and convert them into a size of 350×350.
classes contains all the 25 classes which we have considered. model.predict() will give us the probabilities for all the 25 classes. We will sort the probabilities using np.argsort() and then select the classes having the top 3 probabilities.
img = image.load_img('saaho.jpg', target_size=(img_width, img_height, 3)) plt.imshow(img) img = image.img_to_array(img) img = img/255.0 img = img.reshape(1, img_width, img_height, 3) classes = data.columns[2:] print(classes) y_prob = model.predict(img) top3 = np.argsort(y_prob[0])[:-4:-1] for i in range(3): print(classes[top3[i]])
Index(['Action', 'Adventure', 'Animation', 'Biography', 'Comedy', 'Crime', 'Documentary', 'Drama', 'Family', 'Fantasy', 'History', 'Horror', 'Music', 'Musical', 'Mystery', 'N/A', 'News', 'Reality-TV', 'Romance', 'Sci-Fi', 'Short', 'Sport', 'Thriller', 'War', 'Western'], dtype='object') Drama Action Adventure
As you can see, for the above movie poster our model has selected 3 genres: Drama, Action and Adventure.
https://kgptalkie.com/multi-label-image-classification-on-movies-poster-using-cnn/
You might have heard about tixy.land, the minimalist javascript playground for creative coding.
While scrolling on the tixy feed, being amazed by how creative people can be, I stumbled upon a discussion between Martin Kleppe, the creator of Tixy, and Gareth Heyes, a well known security researcher, regarding creating a Fuzzer for tixy:
As I had experience with code modification tools, I decided to try hacking something up and make a quick fuzzer!
Want to see the result first?
Possible flashes warning
Sure, click here!
Getting started
Tixy setup
Setting up tixy locally is pretty easy, especially since it's on github!
However, since I only wanted to change the JavaScript a bit, I wanted a single file, index.html, without having to compile the CSS myself. I ended up copying the HTML of the live version at tixy.land, replacing the script content with the non-minified index.js from the repo, and replacing the imported example.json with a variable having the same value.
Adding jscodeshift
JSCodeShift is a powerful tool to navigate and modify the AST of source code which I'll be using.
Adding jscodeshift was slightly harder than setting tixy up: As I wanted to keep things simple, I couldn't use the existing npm version as it would require compiling the project.
I ended up using Browserify to compile it to a single file:
npx browserify jscodeshift/index.js -o jscodeshift.min.js --standalone jscodeshift
I then copied this file into my project, added a reference to it in the HTML, and was ready to go!
Getting samples
To fuzz values, we need to start by gathering existing examples, ideally ones with interesting effects.
I ended up merging the existing examples.json and the ones under the tixy.land/gallery page.
Making the fuzzer
With our setup in place, let's start thinking about how to actually implement the fuzzer.
Here is a rough outline of the plan:
- Pick random samples out of the examples
- Convert them to random fragments
- Merge the fragments together
To split the project into smaller samples, we need to figure out exactly where to split!
After analyzing few of the tixies on astexplorer, I ended up picking two distinct operations which can usually be extracted without issues:
Binary Expressions and Call Expressions!
Binary Expressions are mostly arithmetic operators such as + and -, with a few other exceptions. You can view the complete list of these operators on the ast-types repository. Extracting a binary expression node means picking both sides of the expression as well as the operator itself, and they are typically self-contained.
Call Expressions are function calls, such as Math.random() or sin(y). Like the binary expressions, they usually are self-contained, and small enough to extract without issues.
Now that we know what to extract, let's start on the how to extract them!
Picking random samples is simple: pick a size, and pick random elements of the examples array! In this case, I picked an arbitrary size of 8 for the biggest sample count, for no particular reason.
const sampleCount = Math.floor(Math.random() * 8) + 1 const samples = new Array(sampleCount).fill(0).map( () => pickRandom(allSamples) );
Separating them into valid fragments is where we start using JSCodeShift. From a string with valid JavaScript code, we can create our own jscodeshift instance by calling jscodeshift(code). Then, using the .find method on the resulting object with a type such as BinaryExpression gives us an array-like object with all of the binary expressions. Finally, to convert the AST node returned by JSCodeShift back to JavaScript, we need to call the toSource method.
Simple, isn't it?
Here is how the resulting code looks:
const availableOperations = []; const jsc = jscodeshift(sample); jsc.find("BinaryExpression").forEach(v => availableOperations.push(v)); const chosenSnippet = pickRandom(availableOperations); const resultingCode = jscodeshift(chosenSnippet).toSource();
Finally, doing this on all of our selected samples, and on both binary expressions and call expressions, we end up with an array of random code snippets.
Now, to merge the fragments back together, I decided to add a random operator between each of them.
Thankfully, as both sides should be valid JavaScript strings, there is no need to use JSCodeShift anymore, a simple concatenation will do.
const delimiters = ['+', '-', '*', '%', '|', '<<', '>>', '%', '^']; const newCode = fragments.reduce((acc, v) => { return (acc ? acc + pickRandom(delimiters) : '') + v; }, '');
Result
Where would be the fun of generating random snippets if we couldn't view the results!
I ripped the existing nextExample function out of the tixy site and, instead of using the next example, used a random code snippet from the fuzzer.
Now, for the amazing results, I saved you the hassle of doing it yourself! Instead, you can visit garand.dev/projects/tixy/, and click on the tixy until you find interesting results!
For maximum viewing pleasure, I also swapped the gallery page to use my fuzzer instead of good examples:
Many of them are either a stroboscopic nightmare or an exact ripoff of the examples, but sometimes there are interesting patterns emerging.
Found any interesting ones? Please link to them in the comments! I'd love to see what can come out of this project!
https://dev.to/antogarand/making-a-simple-fuzzer-for-tixy-24jk
We’re proud to announce the release of Cerbero Suite 4.0!
There are many new features, especially in the advanced version. Support for ARM32/ARM64 in Carbon and the inspection of Windows crash dumps stand out as two major additions.
All of our customers can upgrade their licenses at a 50% discount for the next 3 months! We value our customers, and everyone who has bought a license since June should have already received a free upgrade for Cerbero Suite 4! If you fall in that category and haven't received a new license, please check your spam folder and in case contact us at sales@cerbero.io. Everyone who has acquired a license before June can upgrade at the discounted price!
This is the full list of news:
+ added Carbon loader for Windows user address space
+ added Carbon loader for Windows DMP files
+ added Carbon support for ARM32 and ARM64
+ added Carbon support for PDB symbols
+ added support in Carbon to define data types
+ added memory analysis support for latest Windows 10 versions
– added Windows x64 setup
– added UI hook extensions
+ improved Windows memory analysis support
+ improved Windows DMP support
+ improved Carbon disassembly
+ improved Ghidra plugin and setup
+ improved decompiler output
– improved Hex Editor
– improved file stats view
– improved symbol demangling
– improved Python speed
– improved headers
– improved PE debug directory support
– improved PDB support
– improved dark mode support on macOS
– improved update check
– improved single view mode
– improved settings
– improved Python SDK
– updated SQLite to 3.32.0
– fixed bugs
Windows crash dumps
Inspecting Windows crash dumps is important for many software developers. Cerbero Suite lets you easily inspect both kernel and mini-dumps. You can view the code, load PDB symbols, inspect the call stack, threads, exception information, bug check information, memory and much more.
This feature does not rely on WinDBG and works on every supported platform!
Carbon support for ARM32 and ARM64
ARM32 and ARM64 are now supported in Carbon and naturally also in the Sleigh decompiler!
Carbon loader for Windows user address space
Every Windows address space can now be explored in Carbon, be it from a physical image or from a crash dump.
Carbon support for PDB symbols
PDB files can be automatically downloaded and imported into Carbon. This feature does not rely on Windows APIs and works on every supported platform.
Defining data types in Carbon
Data types can be defined in Carbon by pressing “D” or via the context menu.
The same data type can be reapplied by pressing “W”.
Memory analysis on the latest Windows 10 versions
We added the headers necessary to perform memory analysis on the latest Windows 10 versions.
Throughout the lifetime of the 4.x series, we’ll continue improving on the support for Windows 10!
Windows x64 edition
A Windows x64 edition has been long overdue, but we didn’t want to deprive our users from being able to run Cerbero Suite on older 32-bit versions of Windows, so we decided to keep both x86 and x64 editions!
UI hook extensions
A new type of extension has been introduced. The purpose of this extension type is to provide additional UI elements for specific parts of the UI. We currently use it to create Python plugins in our settings page.
Improved Carbon disassembly
We have improved Carbon all over the place: the analysis, UI, lists. The experience is now much more refined.
Improved Ghidra plugin and setup
We improved the native UI for Ghidra. By default now the assembly is shown in lower case, as we think it’s easier to read (this feature is configurable).
We also added one more toolbar button in Ghidra for the Cerbero Launcher, a way to launch Cerbero tools on the file currently open in Ghidra.
Setting up the native UI for Ghidra is now easier than ever: just go to the settings in Cerbero under ‘Ghidra’ and click on ‘Install Ghidra plugin’, select the root folder of Ghidra and that’s it! Cerbero will take care of the installation for you!
Improved decompiler output
We have improved the decompiler output by inferring the detection of deferred calls and literals from Carbon to it. A before/after screenshot comparison is worth more than a thousand words!
Hex Editor
Apart from fixing some bugs, we have improved the hex editor by providing a wait dialog with progress and abort to every major data operation.
File stats view
We tried to improve the file stats view by providing additional useful information for all the file formats which warranted it.
Improved symbol demangling
We have greatly improved symbols demangling both for Visual C++ and GCC. All type of mangled symbols are supported now!
Improved speed of Python
We now deploy the bytecode files for all our Python plugins in order to decrease their load time.
Improved updates
Cerbero Suite 4 makes the update process even easier than before. Hashes for updates have always been cryptographically verified, but now you can opt to download the update directly from the UI and that too is verified.
Improved PDB support
More PDB structures are now explorable from the UI.
Improved settings
Apart from the Ghidra plugin installer, there’s a new tab in the settings to create a portable distribution of Cerbero Suite 4.
Improved SDK
We have increased the amount of exposed SDK and added new APIs. Among the many things we have exposed is the Sleigh decompiler. Here's a small code sample:
from Pro.UI import * from Pro.Carbon import * v = proContext().getCurrentView() c = v.getCarbon() d = CarbonSleighDecompiler(c) s = d.decompileToString(0x004028C9) print(s)
A Carbon instance can be created entirely from Python of course.
Improved single view mode
Single view mode is perhaps a barely known feature in Cerbero, but a rather useful one. If you press “Ctrl+Alt+S” while in a view, it will hide all other views. Pressing the shortcut again restores the previous state.
In Cerbero Suite 4 we have introduced the concept of dependent views and have updated single view mode to include them.
We can see an example of this by looking at a crash dump. When we are in the disassembly we would like to keep dependent views (like the call stack or the decompiler) visible when switching to single view mode.
Normal state:
Single view mode:
Apart from the news listed here, we have added many refinements and fixed many bugs.
We hope you enjoy this new release!
Happy hacking!
https://cerbero-blog.com/?p=1875
Here is a small WSGI application built with Werkzeug's Request and Response wrappers (it reads a name parameter from the URL to substitute "World" with another word):
from werkzeug.wrappers import Request, Response def application(environ, start_response): request = Request(environ) text = f"Hello {request.args.get('name', 'World')}!" response = Response(text, mimetype='text/plain') return response(environ, start_response)
And that’s all you need to know about WSGI.
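To see the bare protocol that those wrappers hide, you can drive a WSGI application by hand with a fake environ dict, no server and no Werkzeug required. This sketch uses only the standard library; the fake_start_response helper and the environ contents are illustrative, not part of the tutorial:

```python
from urllib.parse import parse_qs

def application(environ, start_response):
    # Parse the raw query string ourselves, the way Request does for us.
    args = parse_qs(environ.get('QUERY_STRING', ''))
    name = args.get('name', ['World'])[0]
    body = f"Hello {name}!".encode()
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body]

# A minimal fake request: WSGI is just a function call on a dict.
collected = {}
def fake_start_response(status, headers):
    collected['status'] = status

result = b''.join(application({'QUERY_STRING': 'name=WSGI'},
                              fake_start_response))
print(result)                # b'Hello WSGI!'
print(collected['status'])   # '200 OK'
```

This direct-call trick is also how the test clients of WSGI frameworks work under the hood.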
Before we get started, let’s create the folders needed for this application:
/shortly /static /templates
The shortly folder is not a python package, but just something where we drop our files. Directly into this folder we will then put our main module in the following steps. The files inside the static folder are available to users of the application via HTTP. This is the place where CSS and JavaScript files go. Inside the templates folder we will make Jinja2 look for templates. The templates you create later in the tutorial will go in this directory.
Now let’s get right into it and create a module for our application. Let’s create a file called shortly.py in the shortly folder. At first we will need a bunch of imports. I will pull in all the imports here, even if they are not used right away, to keep it from being confusing:
import os import redis from werkzeug.urls import url_parse from werkzeug.wrappers import Request, Response from werkzeug.routing import Map, Rule from werkzeug.exceptions import HTTPException, NotFound from werkzeug.middleware.shared_data import SharedDataMiddleware from werkzeug.utils import redirect from jinja2 import Environment, FileSystemLoader
Then we can create the basic structure for our application and a function to create a new instance of it, optionally with a piece of WSGI middleware that exports all the files on the static folder on the web:
class Shortly(object): def __init__(self, config): self.redis = redis.Redis( config['redis_host'], config['redis_port'], decode_responses=True ) def dispatch_request(self, request): return Response('Hello World!') def wsgi_app(self, environ, start_response): request = Request(environ) response = self.dispatch_request(request) return response(environ, start_response) def __call__(self, environ, start_response): return self.wsgi_app(environ, start_response) def create_app(redis_host='localhost', redis_port=6379, with_static=True): app = Shortly({ 'redis_host': redis_host, 'redis_port': redis_port }) if with_static: app.wsgi_app = SharedDataMiddleware(app.wsgi_app, { '/static': os.path.join(os.path.dirname(__file__), 'static') }) return app
Lastly we can add a piece of code that will start a local development server with automatic code reloading and a debugger:
if __name__ == '__main__': from werkzeug.serving import run_simple app = create_app() run_simple('127.0.0.1', 5000, app, use_debugger=True, use_reloader=True)
The basic idea here is that our Shortly class is an actual WSGI application. The __call__ method directly dispatches to wsgi_app. This is done so that we can wrap wsgi_app to apply middlewares like we do in the create_app function. The actual wsgi_app method then creates a Request object and calls the dispatch_request method, which then has to return a Response object which is then evaluated as a WSGI application again. As you can see: turtles all the way down. Both the Shortly class we create, as well as any request object in Werkzeug, implements the WSGI interface. As a result of that you could even return another WSGI application from the dispatch_request method.
The create_app factory function can be used to create a new instance of our application. Not only will it pass some parameters as configuration to the application, it will also optionally add a WSGI middleware that exports static files. This way we have access to the files from the static folder even when we are not configuring our server to provide them, which is very helpful for development.
Now you should be able to execute the file with python and see a server on your local machine:
$ python shortly.py * Running on * Restarting with reloader: stat() polling
It also tells you that the reloader is active. It will use various techniques to figure out if any file changed on the disk and then automatically restart.
Just go to the URL and you should see “Hello World!”.
Now that we have the basic application class, we can make the constructor do something useful and provide a few helpers on there that can come in handy. We will need to be able to render templates and connect to redis, so let’s extend the class a bit:
def __init__(self, config): self.redis = redis.Redis(config['redis_host'], config['redis_port']) template_path = os.path.join(os.path.dirname(__file__), 'templates') self.jinja_env = Environment(loader=FileSystemLoader(template_path), autoescape=True) def render_template(self, template_name, **context): t = self.jinja_env.get_template(template_name) return Response(t.render(context), mimetype='text/html')
Next up is routing. Routing is the process of matching and parsing the URL to something we can use. Werkzeug provides a flexible integrated routing system which we can use for that. The way it works is that you create a Map instance and add a bunch of Rule objects. Each rule has a pattern it will try to match the URL against and an "endpoint". The endpoint is typically a string and can be used to uniquely identify the URL. We could also use this to automatically reverse the URL, but that's not what we will do in this tutorial.
Just put this into the constructor:
self.url_map = Map([ Rule('/', endpoint='new_url'), Rule('/<short_id>', endpoint='follow_short_link'), Rule('/<short_id>+', endpoint='short_link_details') ])
Here we create a URL map with three rules: / for the root of the URL space, where we will just dispatch to a function that implements the logic to create a new URL; one that follows the short link to the target URL; and another one with the same rule but a plus (+) at the end to show the link details.
So how do we find our way from the endpoint to a function? That's up to you. The way we will do it in this tutorial is by calling the method on_ + endpoint on the class itself. Here is how this works:
def dispatch_request(self, request): adapter = self.url_map.bind_to_environ(request.environ) try: endpoint, values = adapter.match() return getattr(self, f'on_{endpoint}')(request, **values) except HTTPException as e: return e
We bind the URL map to the current environment and get back a
URLAdapter. The adapter can be used to match the request but also to reverse URLs. The match method will return the endpoint and a dictionary of values in the URL. For instance the rule for
follow_short_link has a variable part called
short_id. When we go to a URL like /foo, we will get the following values back:
endpoint = 'follow_short_link'
values = {'short_id': 'foo'}
If it does not match anything, it will raise a
NotFound exception, which is an
HTTPException. All HTTP exceptions are also WSGI applications by themselves which render a default error page. So we just catch them all and return the error itself.
If all works well, we call the function
on_ + endpoint and pass it the request as argument as well as all the URL arguments as keyword arguments and return the response object that method returns.
Let’s start with the first view: the one for new URLs:
def on_new_url(self, request):
    error = None
    url = ''
    if request.method == 'POST':
        url = request.form['url']
        if not is_valid_url(url):
            error = 'Please enter a valid URL'
        else:
            short_id = self.insert_url(url)
            return redirect(f"/{short_id}+")
    return self.render_template('new_url.html', error=error, url=url)
This logic should be easy to understand. Basically we are checking that the request method is POST, in which case we validate the URL and add a new entry to the database, then redirect to the detail page. This means we need to write a function and a helper method. For URL validation this is good enough:
def is_valid_url(url):
    parts = url_parse(url)
    return parts.scheme in ('http', 'https')
For inserting the URL, all we need is this little method on our class:
def insert_url(self, url):
    short_id = self.redis.get(f'reverse-url:{url}')
    if short_id is not None:
        return short_id
    url_num = self.redis.incr('last-url-id')
    short_id = base36_encode(url_num)
    self.redis.set(f'url-target:{short_id}', url)
    self.redis.set(f'reverse-url:{url}', short_id)
    return short_id
The key reverse-url: plus the URL stores the short id. If the URL was already submitted, this won't be None and we can just return that value, which will be the short ID. Otherwise we increment the last-url-id key and convert it to base36. Then we store the link and the reverse entry in redis. And here is the function to convert to base 36:
def base36_encode(number):
    assert number >= 0, 'positive integer required'
    if number == 0:
        return '0'
    base36 = []
    while number != 0:
        number, i = divmod(number, 36)
        base36.append('0123456789abcdefghijklmnopqrstuvwxyz'[i])
    return ''.join(reversed(base36))
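As a quick sanity check (not part of the original tutorial), here is how the encoder behaves for a few inputs; the function is repeated so the snippet runs standalone:

```python
def base36_encode(number):
    assert number >= 0, 'positive integer required'
    if number == 0:
        return '0'
    base36 = []
    while number != 0:
        # divmod peels off the least significant base-36 digit each pass.
        number, i = divmod(number, 36)
        base36.append('0123456789abcdefghijklmnopqrstuvwxyz'[i])
    return ''.join(reversed(base36))

print(base36_encode(0))    # '0'
print(base36_encode(35))   # 'z'
print(base36_encode(36))   # '10'
print(base36_encode(125))  # '3h'
```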
So what is missing for this view to work is the template. We will create this later, let’s first also write the other views and then do the templates in one go.
The redirect view is easy. All it has to do is to look for the link in redis and redirect to it. Additionally we will also increment a counter so that we know how often a link was clicked:
def on_follow_short_link(self, request, short_id):
    link_target = self.redis.get(f'url-target:{short_id}')
    if link_target is None:
        raise NotFound()
    self.redis.incr(f'click-count:{short_id}')
    return redirect(link_target)
In this case we will raise a
NotFound exception by hand if the URL does not exist, which will bubble up to the
dispatch_request function and be converted into a default 404 response.
The link detail view is very similar, we just render a template again. In addition to looking up the target, we also ask redis for the number of times the link was clicked and let it default to zero if such a key does not yet exist:
def on_short_link_details(self, request, short_id):
    link_target = self.redis.get(f'url-target:{short_id}')
    if link_target is None:
        raise NotFound()
    click_count = int(self.redis.get(f'click-count:{short_id}') or 0)
    return self.render_template('short_link_details.html',
        link_target=link_target,
        short_id=short_id,
        click_count=click_count
    )
Please be aware that redis always works with strings, so you have to convert the click count to
int by hand.
And here are all the templates. Just drop them into the templates folder. Jinja2 supports template inheritance, so the first thing we will do is create a layout template with blocks that act as placeholders. We also set up Jinja2 so that it automatically escapes strings with HTML rules, so we don’t have to spend time on that ourselves. This prevents XSS attacks and rendering errors.
layout.html:
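The layout template's HTML was stripped during extraction. A minimal sketch of what layout.html could look like, inferred from the child templates and the stylesheet below (the exact markup is an assumption):

```html
<!doctype html>
<title>{% block title %}{% endblock %} | shortly</title>
<link rel="stylesheet" href="/static/style.css" type="text/css">
<div class="box">
  <h1><a href="/">shortly</a></h1>
  <p class="tagline">Shortly is a URL shortener written with Werkzeug</p>
  {% block body %}{% endblock %}
</div>
```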
new_url.html:
{% extends "layout.html" %}
{% block title %}Create New Short URL{% endblock %}
{% block body %}
  <h2>Submit URL</h2>
  <form action="" method="post">
    {% if error %}
      <p class="error"><strong>Error:</strong> {{ error }}</p>
    {% endif %}
    <p>URL:
      <input type="text" name="url" value="{{ url }}" class="urlinput">
      <input type="submit" value="Shorten">
  </form>
{% endblock %}
short_link_details.html:
{% extends "layout.html" %}
{% block title %}Details about /{{ short_id }}{% endblock %}
{% block body %}
  <h2><a href="/{{ short_id }}">/{{ short_id }}</a></h2>
  <dl>
    <dt>Full link</dt>
    <dd class="link"><div>{{ link_target }}</div></dd>
    <dt>Click count:</dt>
    <dd>{{ click_count }}</dd>
  </dl>
{% endblock %}
For this to look better than plain black and white, here is a simple stylesheet to go along with it:
static/style.css:
body        { background: #E8EFF0; margin: 0; padding: 0; }
body, input { font-family: 'Helvetica Neue', Arial, sans-serif;
              font-weight: 300; font-size: 18px; }
.box        { width: 500px; margin: 60px auto; padding: 20px;
              background: white; box-shadow: 0 1px 4px #BED1D4;
              border-radius: 2px; }
a           { color: #11557C; }
h1, h2      { margin: 0; color: #11557C; }
h1 a        { text-decoration: none; }
h2          { font-weight: normal; font-size: 24px; }
.tagline    { color: #888; font-style: italic; margin: 0 0 20px 0; }
.link div   { overflow: auto; font-size: 0.8em; white-space: pre;
              padding: 4px 10px; margin: 5px 0; background: #E5EAF1; }
dt          { font-weight: normal; }
.error      { background: #E8EFF0; padding: 3px 8px; color: #11557C;
              font-size: 0.9em; border-radius: 2px; }
.urlinput   { width: 300px; }
Look at the implementation in the example directory of the Werkzeug repository to see a version of this tutorial with some small refinements such as a custom 404 page.
Hey actually it did work out...sorry the first time I tested it my program had problem in another part that I didn't notice...but now I figured out what the problem was, it works just fine :)
thanks...
yeah
but I'm looking for a way to display all the elements of the array on JTextField
like this:
a1jTextArea.setText(a[1].toString + " " + a[2].toString+ " " +.........a[10].toString)
which...
tnx, but didn't quite get that: you mean like this: a1jTextArea.setText(a[1].toString);
any idea about the other questions?
I guess I need your help again!
this is the code for my Rational class that I asked in the first post, which is perfectly fine :>
public class Rational{
private long numerator;
...
hmm, thanks much...got it!
now why does it have to be "public static" and not just public?
no wasn't trying to implement Comparable interface...I deleted that part from my program....just want to make a regular method to compare two rationals!
hmmm, tnx
I'm not finished with this one yet! I will add those methods later, but it won't give me a compile error, or run error later if I don't add them right?
also, I made this method for...
Hi...been a while, but I was working on my code, and learned some new stuff!!
I've wrote this code so far:
public class Rational extends Number implements Comparable<Rational>{
...
um I know why it's not a value, cuz it's a reference variable! that's not what my question is about, I'm more asking for a solution to get a rational number by calling a constructor Rational!
I have...
Hi, although by now I know enough about making methods and constructors, their definitions...I don't know why I have problem with this exercise of my teacher...now I don't want anyone to write the...
Hi, I have this exercise and kinda confuses me...I needed some help from professionals to clear things up for me(this is a school project, we haven't finished the course yet, so maybe some of the...
changed it, still gives me error!!!!
but found it finally
same thing u said about arrays(confusing length and indexes)...I made a mistake in declaring the matrix array...it should have been a...
GODDAMN IT!!!!!
wait but I changed it to 9, still get the same error, except it's 9 instead of 8 in the error: java.lang.ArrayIndexOutOfBoundsException: 8
intBet is the user's input...the program is part of a GUI made by netBeans GUI maker!!! so I wasn't sure I should put it all here!
--- Update ---
@Chris Brown
I see what you mean
I got my...
looks like I'm actually having a problem
this is the code I wrote so far in order to get a matrix:
package matrix.inverse;
import java.util.*;
I think I'm good (for now lol)....I was wondering how to use the method String.split()....did some google and found out it has to be like: StringName.split(" "), with space in between quotation...
sorry for the delay...and thanks for the responses
all the digits after a space has to be considered one number, for example 1 23 4 7 568 should be considered as 1,23,4,7,568....so the element of...
Hi, I'm working on a program about matrices...for getting the different elements of the matrix I must prompt the user as below:
prompt example: Enter a11, a12, a13, a21, a22, a23, a31, a32, a33: ...
Hi again
I know it's been a while, but wrote a piece of code, and can't find what my mistake is!
it's about variables again
public int result;
private void...
Oooooh "the same class or package"...now I get it...and thanks for the link
Ok thanks a lot
So you mean in general, we only use public, private, protected,etc... only when we want to make a class that is going to be used in some other program, or external codes?! in this...
Hi,I needed to know why we should use modifiers for variables?
I have this piece of code:
import java.util.*;
public class Test{
public String student;
Hi everyone
I was studying about data types in Java, I have some basic questions
what is the meaning of default value for a data type?! what's the use of it?
for example:
default value of int...
Got it, I'm so stupid and thanks much :D
it was in my if loop...I should have had variable u instead of j!
Parsing & type inference (Algorithm W in Haskell), and thoughts about Union types
15 Dec 2014
(... Loyc, but I digress.)
case (\x.x) () of () -> "Output"), but the type inference algorithm is cleanly divided into three parts (equation collection, unification, solving), and part 1 includes a discussion of how that weird CS notation relates to actual source code, at least in terms of the notation used by my professor (evidently each professor has a different notation).
The implementation consists of three files, the parser (ParserA4.hs), the type inference and runtime engines (Ass4.hs), and an example file which you can run directly (examples.txt). Each is self-documenting. If you run
ghci Ass4.hs you can see all the type equations collected by part one using
testCollect, e.g.
testCollect "(Y \\fact x. if x<=1 then 1 else x*(fact (x-1))) 5". Run main or repl to test the whole language (typing and execution).
How Haskell annoyed me
Writing these two assignments also drew my attention to a design flaw of Haskell, or at least an annoying limitation, which clearly makes it hard to build large software systems. Indeed, since Haskell doesn't allow you to construct lists of a type class such as Show a, I found myself wanting union types.
The code is divided into two parts, a parser (ParserA4.hs) and the main assignment (Ass4.hs). The parser defines this type, which is the data type of literals that it can parse:
data Value = IntVal Int
           | FloatVal Float
           | CharVal Char
           | BoolVal Bool
           | ListVal [Value]
           | UnitVal
           | PairVal (Value, Value)
           deriving (Eq)
Actually PairVal is kind of superfluous for the parser, because while the parser can parse the comma operator for pair construction, it does not actually support pair literals and does not use the
PairVal constructor.
The parser also defines the expression type for lambda terms, which includes a Literal constructor holding a Value.
The assignment itself (Ass4.hs), however, needs to support not just the 7 types of
Value listed, but also closures of compiled run-time code (lists of
CesInstr, which are instructions for the simple “CES virtual machine”). So I defined this second type:
-- We must support closures as well as normal Values
data Value' = V Value
            | Closure [CesInstr] [Value']
            | YClosure [CesInstr] [Value']
            deriving (Eq, Show)
(There are two types of closures, due to the fact that the professor decided that recursive functions built with the Y combinator would work differently than those built without it.) Since Haskell doesn't have a concept of inheritance, I built Value' with composition instead and a new constructor
V. It’s a bit annoying that I can’t “inherit”
Value, but hey, it works, right?
This causes a limitation at run time, though, one that I didn’t mention when I handed in the assignment. Remember that there is a
ListVal [Value] constructor. This means that lists can only contain values of type
Value, not
Value'. This, in turn, means that my runtime engine cannot support lists of closures. You’ll get a runtime error if you try to store a lambda function in a list. In principle it is perfectly possible, but because of the types involved, you can’t.
How can we solve this problem? In Haskell there are multiple ways, none of which are very satisfying.
The first solution is that we could modify the definition of
Value in the ParserA4 module to support closures. But this would be inappropriate. The parser module is supposed to define the parser–it knows nothing about the runtime engine, and it would be inappropriate to define large types like
CesInstr in the Parser module.
The second solution is to define a completely separate type for run-time values versus compile-time values:
-- Runtime version of Value
data Value' = IntVal' Int
            | FloatVal' Float
            | CharVal' Char
            | BoolVal' Bool
            | ListVal' [Value]
            | UnitVal'
            | PairVal' (Value, Value)
            | Closure' [CesInstr] [Value']
            | YClosure' [CesInstr] [Value']
            deriving (Eq)
But this makes a lot of extra work for you.
- Either you have to put apostrophes on most of your new type constructors, or you have to import qualified the parser and manually change all references to the parser module.
- You also have to write conversion routines that manually map each and every kind of Value to Value'. This is annoying to do in a toy compiler; how much more annoying would it be in a "real-life" compiler?
Neither of these solutions is fun. I think this kind of problem has already been solved nicely by Ceylon, which has a feature called union types. If Haskell supported union types, then a type like
data Color = Red | Green | Blue | Custom Int
would be treated as four separate types, with
Color as an alias for the union of those four types. This immediately solves the ugly-composition problem; I could write something like
data Value' = existing Value
            | Closure [CesInstr] [Value']
            | YClosure [CesInstr] [Value']
            deriving (Eq, Show)
Where “
existing” would be a way to re-use existing type constructor(s) in a new union type.
Then I could just write
(IntVal 5) instead of
(V (IntVal 5)), and it would simultaneously be a valid value of type
Value and
Value' . But what about closures? How can I support lists of closures without modifying the parser module?
Well, remember that what makes this problem so annoying is that I have to write a big function to map
IntVal to
IntVal',
FloatVal to
FloatVal', etc. I have to write code for all the type constructors even though the actual problem I’m trying to solve involves only lists and closures.
If we had union types, we could create a new runtime list type that supports closures, but keep all the other values the same. It would work something like this: I would define a new
Value' type as before, but it would re-use most of the type constructors that already exist.
data Value' = existing (IntVal | FloatVal | CharVal | BoolVal | UnitVal | PairVal)
            | ListVal' [Value']
            | Closure [CesInstr] [Value']
            | YClosure [CesInstr] [Value']
            deriving (Eq)
Notice that
Value' is the same as
Value except that (1)
ListVal has been removed and (2)
ListVal',
Closure, and
YClosure have been added.
I still need a function to convert from
Value to
Value', but it’s very simple:
toRuntimeValue v = case v of
    ListVal list -> ListVal' (map toRuntimeValue list)
    v'           -> v'
This is just a type-changing trick. In this code,
list has type
[Value] which really means
[IntVal|FloatVal|ListVal...], where the union of possible types does not include
Closure.
ListVal' is defined such that it can contain
Closure and
YClosure, as desired for the runtime engine. Therefore we have to switch from
ListVal to
ListVal', and the expression
map toRuntimeValue list is needed in case the list itself contains other
ListVals that also need to be converted to
ListVal'.
The inferred return value of
toRuntimeValue will be
IntVal|FloatVal|ListVal'|.... This type is not equal to
Value', because
Closure and
YClosure are not among the possible output types, but it would be a subtype of
Value', so the compiler would allow me to add this type signature:
toRuntimeValue :: Value -> Value'
toRuntimeValue v = case v of
    ListVal list -> ListVal' (map toRuntimeValue list)
    v'           -> v'
Note that inside the last case,
v' would have type
IntVal|FloatVal|... where
ListVal is not one of the possibilities; thus
v' is compatible with
Value', or in other words
v' is a subtype of
Value'.
Hope that makes sense. Anyway, it’s just a thought. It’s possible that Haskell’s type system has some limitations that would prevent such a feature from being added, I don’t know.
P.S. I would have liked to explore a slightly different solution in which the
ListVal constructor can hold a specified type:
data Value v = IntVal Int ... | ListVal [v] ... deriving (Eq)
That way we wouldn’t need separate
ListVal and
ListVal' constructors, we’d use
Value Value and
Value' Value' instead. But there’s an obvious problem–
Value Value is ill-formed.
Value requires one type parameter, and we want to essentially give the same type to itself as its own parameter, making an infinite cycle
Value (Value (Value (Value..., which is illegal. I have run into this problem at least once before in C#, but I don’t have a proposal handy for solving this problem in the type system.
Edit: I’ve since noticed that there are a large number of exotic and less-known type system features in Haskell. It would not be surprising if the problem I’ve described can be avoided with one of them.
No Access Point SSID Detected
- Sn3akyP3t3 last edited by
I seem to be jumping from problem to problem with this little GPY board! I'm trying to flash the LTE firmware from the SD card, but I'm not able to connect to my home WiFi. I figured I'd simplify the matter and let the module run its own Access Point, but nothing I'm trying seems to be broadcasting an SSID. For example the following I put into the boot.py and upload, but I see nothing in Windows network detection nor on my Android phone.
Most basic! I would expect some default SSID name from this, but I don't know:
from network import WLAN
wlan = WLAN()
I took the above and added one line from documentation which didn't show up either:
from network import WLAN
wlan = WLAN()
wlan.init(mode=WLAN.AP, ssid='wipy-wlan', auth=(WLAN.WPA2, ''), channel=7, antenna=WLAN.INT_ANT)
@Sn3akyP3t3 said in No Access Point SSID Detected:
What is the underlying reason for the need to do this?
If I got this right, the reason is that
the partition table is changed
due to the new formatting mechanism of NVS introduced in IDF_v3.2 (1.20.1)
So, I am figuring something in NVRAM might be crucial to get right which otherwise might magically break the WiFi and/or LoRa bearers.
References
- Sn3akyP3t3 last edited by
This indeed solved the issue. What is the underlying reason for the need to do this? I didn't see anything wrong in code nor did I get any indication something wasn't kosher during flashing previously.
@Sn3akyP3t3 Please can you go the Pycom flasher folder and from a cmd / terminal type the following:
pycom-fwtool-cli.exe -p "your com port" erase_all
Re-flash your device and your AP should come up.
Performance monitoring¶
If napari is not performing well, you can use
napari.utils.perf to help
diagnose the problem.
The module can do several things:
Time Qt Events
Display a dockable performance widget.
Write JSON trace files viewable with
chrome://tracing.
Time any function that you specify in the config file.
Monitoring vs. profiling¶
Profiling is similar to performance monitoring. However profiling usually involves running an external tool to acquire timing data on every function in the program. Sometimes this will cause the program to run so slowly it’s hard to use the program interactively.
Performance monitoring does not require running a separate tool to collect the timing information, however we do use Chrome to view the trace files. With performance monitoring napari can run at close to full speed in many cases. This document discusses only napari’s performance monitoring features. Profiling napari might be useful as well, but it is not discussed here.
Enabling perfmon¶
There are two ways to enable performance monitoring. Set the environment
variable
NAPARI_PERFMON=1 or set
NAPARI_PERFMON to the path of
a JSON configuration file, for example
NAPARI_PERFMON=/tmp/perfmon.json.
Note
Note: when using
NAPARI_PERFMON, napari must create the Qt Application.
If you are using
NAPARI_PERFMON=1 ipython, do not use
%gui qt before
creating a napari
Viewer.
Setting
NAPARI_PERFMON=1 does three things:
Times Qt Events
Shows the dockable performance widget.
Reveals the Debug menu which you can use to create a trace file.
Configuration file format¶
Example configuration file:
{
    "trace_qt_events": true,
    "trace_file_on_start": "/tmp/latest.json",
    "trace_callables": [
        "chunk_loader"
    ],
    "callable_lists": {
        "chunk_loader": [
            "napari.components.chunk._loader.ChunkLoader.load_chunk",
            "napari.components.chunk._loader.ChunkLoader._done"
        ]
    }
}
Configuration options¶
trace_qt_events¶
If true perfmon will time the duration of all Qt Events. You might want to turn this off if the overhead is noticeable, or if you want your trace file to be less cluttered.
trace_file_on_start¶
If a path is given, napari will start tracing immediately on start. In many cases this is much more convenient than using the Debug Menu. Be sure to exit napari using the Quit command. The trace file will be written on exit.
trace_callables¶
Specify which
callable_lists you want to trace. You can have many
callable_lists defined, but this setting says which should be traced.
callable_lists¶
These lists can be referenced by the
callable_lists option. You might
want multiple lists so they can be enabled separately.
Trace file¶
The trace file that napari produces is viewable in Chrome. Go to the
special URL
chrome://tracing. Use the Load button inside the Chrome
window, or just drag-n-drop your JSON trace file into the Chrome window.
You can also view trace files using the
Speedscope website.
It is similar to
chrome://tracing but has some different features.
The trace file format is specified in the Trace File Format Google Doc. The format is well-documented, but there are no pictures so it’s not always clear how a given feature actually looks in the Chrome Tracing GUI.
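As an illustration (not taken from the napari docs), a minimal trace file in this format can be produced with a few lines of Python; the fields shown (name, ph, ts, dur, pid, tid) are the core ones for a complete event:

```python
import json

# One complete event ("ph": "X") starting at ts=0 microseconds, lasting 100.
trace = {
    "traceEvents": [
        {"name": "MouseButtonPress", "ph": "X",
         "ts": 0, "dur": 100, "pid": 1, "tid": 1}
    ]
}

# Saving this to a .json file yields something chrome://tracing can load.
print(json.dumps(trace, indent=2))
```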
Example investigation¶
This is an example showing how you might use the
napari.utils.perf module.
Add a sleep¶
To simulate a performance problem in napari, add a
sleep() call to the
Labels.paint method, this
will make the method take at least 100 ms:
def paint(self, coord, new_label, refresh=True):
    import time
    time.sleep(0.1)
    if refresh is True:
        self._save_history()
Create a perfmon config file¶
Create a minimal perfmon config file
/tmp/perfmon.json that looks like this:
{
    "trace_qt_events": true,
    "trace_file_on_start": "/tmp/latest.json",
    "trace_callables": []
}
This will write
/tmp/latest.json every time we run napari. This file is
only written on exit, and you must exit with the Quit command. Using
trace_file_on_start is often easier than manually starting a trace using
the Debug menu.
Run napari¶
Now run napari’s
add_labels example like this:
NAPARI_PERFMON=/tmp/perfmon.json python examples/add_labels.py
Use the paint tool and single-click once or twice on the labels layer. Look at the performance widget, it should show that some events took over 100ms. The performance widget is just to give you a quick idea of what is running slow:
The trace file will give you much more information than the performance widget. Exit napari using the Quit command so that it writes the trace file on exit.
View trace in Chrome¶
Run Chrome and go to the URL
chrome://tracing. Drag and drop
/tmp/latest.json into the Chrome window, or use the Load button to
load the JSON file. You will usually need to pan and zoom the trace to
explore it, to figure out what is going on.
You can navigate with the mouse, but using the keyboard might be easier.
Press the
AD keys to move left and right, and press the
WS keys to zoom
in or out. Both the
MouseButtonPress and
MouseMove events are slow. In
the lower pane the
Wall Duration field says it took over 100ms:
So we can see that some events are running slow. The next question is
why are
MouseButtonPress and
MouseMove running slow? To answer this
question we can add more timers. In this case we know the answer, but often
you will have to guess or experiment. You might add some timers and then
find out they actually run fast, so you can remove them.
Add paint method¶
To add the
Labels.paint method to
the trace, create a new list of callables named
labels and put the
Labels.paint method into
that list.
{
    "trace_qt_events": true,
    "trace_file_on_start": "/tmp/latest.json",
    "trace_callables": [
        "labels"
    ],
    "callable_lists": {
        "labels": [
            "napari.layers.labels.Labels.paint"
        ]
    }
}
Create the new trace File¶
Run
add_labels as before, click with the paint tool, exit with the Quit
command.
View the new trace File¶
Drop
/tmp/latest.json into Chrome again. Now we can see that
MouseButtonPress calls
Labels.paint and that
Labels.paint is really responsible
for most of the time. After clicking on the event press the
m key, that
will highlight the event duration with arrows and print the duration right
on the timeline, in this case it says the event took 106.597ms:
When investigating a real problem we might have to add many functions to
the config file. It’s best to add timers that take a lot of time. If you
add a timer that’s called thousands of times, it will add overhead and will
clutter the trace file. In general we want to trace important and
interesting functions. If we create a large
callable_list we can save it
for future use.
Advanced¶
Experiment with the
napari.utils.perf features and
you will find your own tricks and techniques.
Create multiple
callable_lists and toggle them on or off depending on
what you are investigating. The perfmon overhead is low, but tracing only
what you care about will yield the best performance and lead to trace files
that are easier to understand.
Use the
perf_timer context object to
time only a block of code, or even a single line, if you don’t want to time
an entire function.
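perf_timer is used as a context manager. As a rough sketch of the idea (this is not napari's actual implementation), a minimal timing context manager can be written like this:

```python
import time
from contextlib import contextmanager

@contextmanager
def perf_timer(name):
    # Measure the wall-clock duration of the enclosed block.
    start = time.perf_counter()
    try:
        yield
    finally:
        duration_ms = (time.perf_counter() - start) * 1000
        print(f"{name}: {duration_ms:.3f} ms")

with perf_timer("sum of range"):
    sum(range(100_000))
```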
Use
add_instant_event and
add_counter_event to annotate
your trace file with additional information beyond just timing events. The
add_instant_event function draws a vertical line on the trace in Chrome,
to show when something happened like a click. The
add_counter_event
function creates a bar graph on the trace showing the value of some counter
at every point in time. For example you could record the length of a queue,
and see the queue grow and shrink over time.
Calls to
perf_timer,
add_instant_event and
add_counter_event should
be removed before merging code into main. Think of them like “debug
prints”, things you add while investigating a problem, but you do not leave
them in the code permanently.
You can save JSON files so that you can compare how things looked before and after your changes.
setuid(2) setuid(2)
NAME
setuid, setgid - set user and group IDs
SYNOPSIS
#include <unistd.h>
int setuid(uid_t uid);
int setgid(gid_t gid);
DESCRIPTION
setuid() sets the real-user-ID (ruid), effective-user-ID (euid),
and/or saved-user-ID (suid) of the calling process. The super-user's
euid is zero. The following conditions govern setuid's behavior:
+ If the euid is zero, setuid() sets the ruid, euid, and suid to
uid.
+ If the euid is not zero, but the argument uid is equal to the
ruid or the suid, setuid() sets the euid to uid; the ruid and
suid remain unchanged. (If a set-user-ID program is not
running as super-user, it can change its euid to match its
ruid and reset itself to the previous euid value.)
+ If euid is not zero, but the argument uid is equal to the
euid, and the calling process is a member of a group that has
the PRIV_SETRUGID privilege (see privgrp(4)), setuid() sets
the ruid to uid; the euid and suid remain unchanged.
setgid() sets the real-group-ID (rgid), effective-group-ID (egid),
and/or saved-group-ID (sgid) of the calling process. The following
conditions govern setgid()'s behavior:
+ If euid is zero, setgid() sets the rgid and egid to gid.
+ If euid is not zero, but the argument gid is equal to the rgid
or the sgid, setgid() sets the egid to gid; the rgid and sgid
remain unchanged.
+ If euid is not zero, but the argument gid is equal to the
egid, and the calling process is a member of a group that has
the PRIV_SETRUGID privilege (see privgrp(4)), setgid() sets
the rgid to gid; the egid and sgid remain unchanged.
RETURN VALUE
Upon successful completion, setuid() and setgid() return 0;
otherwise, they return -1 and set errno to indicate the error.
ERRORS
setuid() and setgid() fail and return -1 if any of the following
conditions are encountered:
Hewlett-Packard Company - 1 - HP-UX 11i Version 2: August 2003
[EPERM] None of the conditions above are met.
[EINVAL] uid (gid) is not a valid user (group) ID.
WARNINGS
It is recommended that the PRIV_SETRUGID capability be avoided, as it
is provided for backward compatibility. This feature may be modified
or dropped from future HP-UX releases. When changing the real user ID
and real group ID, use of setresuid() and setresgid() (see
setresuid(2)) are recommended instead.
AUTHOR
setuid() was developed by AT&T, the University of California,
Berkeley, and HP.
setgid() was developed by AT&T.
SEE ALSO
exec(2), getprivgrp(2), getuid(2), setresuid(2) privgrp(4).
STANDARDS CONFORMANCE
setuid(): AES, SVID2, SVID3, XPG2, XPG3, XPG4, FIPS 151-2, POSIX.1
setgid(): AES, SVID2, SVID3, XPG2, XPG3, XPG4, FIPS 151-2, POSIX.1
Functions.
Function Declaration
Function declarations are used to create named functions. These functions can be called using their declared name. Function declarations are built from:
- The function keyword.
- The function name.
- An optional list of parameters separated by commas enclosed by a set of parentheses
().
- A function body enclosed in a set of curly braces
{}.
The example code provided contains a function named
add() that takes in 2 values and prints the sum of those numbers.
Calling Functions
Functions can be called, or executed, elsewhere in code by writing the function name followed by parentheses. When a function is called, the code inside its function body runs. Arguments are values passed into a function when it is called. Calling add(3, 5), for example, would output 8.
Return Keyword
Functions return (pass back) values using the
return keyword.
return ends function execution and returns the specified value to the location where it was called.
A common mistake is to forget the
return keyword, in which case the function will return undefined by default.
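The contrast between returning a value and forgetting the return keyword can be sketched like this (helper names invented for illustration):

```javascript
function addWithReturn(num1, num2) {
  return num1 + num2; // execution ends here; the sum is passed back
}

function addWithoutReturn(num1, num2) {
  num1 + num2; // computed, but never returned
}

const sum = addWithReturn(3, 5);        // 8
const nothing = addWithoutReturn(3, 5); // undefined
```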
Arrow Functions
Arrow function expressions were introduced in ES6. These expressions are clean and concise, using the => (arrow) notation in place of the function keyword.
Arrow function with no arguments:
Arrow function with a single argument:
Arrow function with two arguments:
Concise arrow function:
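The four variants listed above might look like this (function names invented for illustration):

```javascript
// No arguments: empty parentheses are required
const greet = () => "hello";

// Single argument: parentheses around the parameter are optional
const square = x => x * x;

// Two arguments
const addNums = (a, b) => a + b;

// Concise body: a single expression is returned implicitly, no braces needed
const isPositive = n => n > 0;
```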
Anonymous Functions
Anonymous functions in JavaScript do not have a name property. They can be defined using the
function keyword, or as an arrow function. See the code example for the difference between a named function and an anonymous function.
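The referenced code example is not included in this extract; a minimal comparison might be the following (note that modern engines infer a name for an expression assigned to a variable, so the truly anonymous case is an unassigned expression):

```javascript
// Named function declaration
function named() {}

// Anonymous function expression (here assigned to a variable)
const fromKeyword = function () {};

// Anonymous arrow function
const fromArrow = () => {};

console.log(named.name);            // "named"
console.log((function () {}).name); // "" for a truly anonymous function
```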
Source: https://www.codecademy.com/resources/docs/javascript/functions
In the previous post, we explained C# variables; C# variables can be categorized into three categories.
Let's understand this one by one.
Value type variables are those, which can be assigned a value directly. These are the built-in primitive data types, such as char, int, and float, as well as user-defined types declared with struct. They are derived from the class System.ValueType.
When you declare an int type variable, the system allocates memory to store the value. In the previous tutorial, you can check the table which shows the memory allotted to the int type: 4 bytes.
To get the exact size of a type or a variable on a particular platform, you can use the sizeof operator. The expression sizeof(type) yields the storage size of the object or type in bytes.
using System;

namespace DataTypeCSharp
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Size of int: {0}", sizeof(int));
        }
    }
}
Output:
Size of int: 4
The reference types do not contain the actual data stored in a variable; they contain a reference to the data. Classes and other complex data types are constructed from the primitive types. Variables of such types do not contain an instance of the type, but merely a reference to an instance.
In other words, they refer to a memory location. More than one reference-type variable can refer to the same memory location; if the data in that location is changed through one variable, the other variables automatically reflect this change in value. Examples of built-in reference types are object, dynamic, and string.
The object type is the base class for all data types in C# and the .NET Framework. object is an alias for the System.Object class, which is present in the System namespace. Every class in C# is directly or indirectly derived from the Object class: a class that does not extend any other class is a direct child of Object, while one that does is indirectly derived. Assigning a value type to an object reference is called boxing:
object obj = 50; // this is boxing
A dynamic type variable can store a value of any type in C#. Type checking for these variables takes place at run time.
Here is the general form to declare a dynamic type in C# programming:
dynamic x = 20;
So the general syntax of a dynamic type variable is:
dynamic {variable_name} = {value};
The string type is one of the most widely used data types. It allows you to assign string values (a series of characters) to a variable.
The string keyword refers to the System.String class.
String str = "C# Datatype Tutorial";
A pointer type variable stores the memory address of another type. Pointers in C# have the same capabilities as pointers in C or C++, but they can only be used inside an unsafe context.
Here is how to declare a pointer in C#:
type *identifier;
Example:
char *chpt; int *intpt;
Source: https://qawithexperts.com/tutorial/c-sharp/7/csharp-data-type
You can write Python rules in OH 3 today. However, to make doing so easier and more convenient there is a Helper Library that implements a number of annotations and library functions to make interacting with the openHAB Java classes more convenient.
The Rule engine is complete and works just fine.
For what ever reason, the Helper Libraries are not yet updated to work with OH 3. But when they are there should be an add-on that one can install to get those available. In the mean time one can manually install the library from GitHub by pulling the PR with the fixes for OH 3.
The answer to that question is going to be the same (or very nearly the same) in any of the rules languages.
Python:
from java.time import ZonedDateTime

now = ZonedDateTime.now()
if now.isAfter(items["MyDateTimeItem"].getZonedDateTime()):
JavaScript:
var ZonedDateTime = Java.type("java.time.ZonedDateTime");
var now = ZonedDateTime.now();
if (now.isAfter(items["MyDateTimeItem"].getZonedDateTime())) {
Rule DSL
if(now.isAfter(MyDateTimeItem.state.getZonedDateTime)){
or if that doesn’t work
if(now.isAfter((MyDateTimeItem.state as DateTimeType).getZonedDateTime)){
For the most part, in Python, anything that is part of Python 2.7 is available by default. Other pip-style libraries can be installed, but it requires a special procedure. But most of the stuff you will be doing in rules will be interacting with openHAB. And that is generally going to be the same no matter what language you are using, as you will be interacting with the openHAB Java Objects. It's far easier to answer specific questions than to just dump the javadocs on you.
But in general:
Rules DSL: Pretty much everything you can use will be included by default. That’s why you don’t see an import for ZonedDateTime and you can just access now without calling the method on ZonedDateTime. But there are a lot of things that can’t be accessed (e.g. the Item Metadata Registry). Rules DSL also does not support libraries.
Python/JavaScript/Groovy: Except for the ItemRegistry (
ir, Item states are available through the
items dict), just about anything from openHAB you’ll need to use will need to be imported. That’s the sort of thing the Helper Library takes care of for you. Note that I’ve personally added an Issue to get more of this stuff added to the default context so we don’t have to import it.
Blockly: Not really complete at this time.
Probably not. At some point I suspect we will get GraalVM Python 3 support, at which point you'll have a choice between sticking with Jython (as long as it remains viable) and GraalVM Python 3. But it will likely require at most some minor edits (similar to what rules need anyway when moving between major OH releases).
No, it’s completely compatible and there are a number of users using it. It’s the Helper Libraries on the main branch that are not compatible because OH 3 introduced a number of breaking changes (the move from Joda to ZonedDateTime, some core openHAB Actions moved, etc.). These are all fixed in a PR that has not been reviewed or accepted for some reason.
That sounds like a horribly over complicated configuration. Why this approach? Why not use the MQTT binding to talk to MQTT directly? Why involve Exec and Python at all?
tl;dr and conclusion: The Python add-on works just fine. The Helper Libraries are not yet available as an add-on but can be manually installed. The few minor changes required to make the Helper Libraries work with OH 3 are available in a PR. The Helper Libraries make interacting with openHAB easier but are not required (I’m not using them).
If your main concern is “staying close to the core” and reducing dependencies on extra stuff, I would recommend using JavaScript instead for rules. It comes by default (no add-on needs to be installed). However, at some point in the future this too will be updated from Nashorn JavaScript to GraalVM JavaScript at which point there will be some minor changes required to your existing rules mainly having to do with how libraries are loaded and used in the rules.
Your best source for “how do I do X in Python” will be looking at the Helper Library docs and the Helper Library code (if you don’t want to use the Helper Libraries themselves).
Source: https://community.openhab.org/t/whats-the-deal-with-python-rules-in-openhab-3/115021/23
The structure of a skeletal Java source file is depicted in Figure 2.2. A Java source file can have the following elements that, if present, must be specified in the following order:
An optional package declaration to specify a package name. Packages are discussed in Section 4.6.
Zero or more import declarations. Since import declarations introduce class and interface names in the source code, they must be placed before any type declarations. The import statement is discussed in Section 4.6.
Any number of top-level class and interface declarations. Since these declarations belong to the same package, they are said to be defined at the top level, which is the package level.
The classes and interfaces can be defined in any order. Class and interface declarations are collectively known as type declarations. Technically, a source file need not have any such definitions, but that is hardly useful.
The Java 2 SDK imposes the restriction that a source file can define at most one public class. If a public class is defined, the file name must match this public class: if the public class name is NewApp, then the file name must be NewApp.java.
Classes are discussed in Section 4.2, and interfaces are discussed in Section 6.4.
Note that except for the package and the import statements, all code is encapsulated in classes and interfaces. No such restriction applies to comments and white space.
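The ordering rules above can be illustrated with a small skeleton source file (the names used here are hypothetical, not from the book):

```java
// NewApp.java -- illustrative skeleton of a Java source file
package com.example.app;   // 1. optional package declaration (first, if present)

import java.util.List;     // 2. zero or more import declarations

public class NewApp {      // 3. top-level type declarations
    public static void main(String[] args) {
        List<String> order = List.of("package", "import", "types");
        System.out.println(String.join(", ", order));
    }
}

interface Helper { }       // non-public types may share the file, in any order
```

Since NewApp is the public class, this file must be named NewApp.java.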
Source: https://etutorials.org/cert/java+certification/Chapter+2.+Language+Fundamentals/2.5+Java+Source+File+Structure/
max7, you might as well complain that PHP sucks because it was coded by gorillas, since you won't stop making stuff up here.
You do get the impression that he's just grabbing a handful of arguments he's read on other forums and then just throwing them at the screen and seeing what sticks.
Cheers,
D.
I am not that big php hater. I was surprised to see so many php lover. I hope someone will agree that python and ruby were designed by more clever guys then php.
PHP is full so many changes. I started to program in PHP3 then PHP4 then PHP5.
PHP 6 would be more different with unicode support and all these bad features removed.
What does that mean? I think it means that PHP team is fixing bad design concepts.
What will PHP6 look like? PHP6 will have more and more from Java.
Perl6 will compile to byte code. May be PHP7 will compile in byte code as well.
Actually products like zend and ioncube save php's binary tree as compiled.
Same is done by optimisator. xcode and apc
What we will get at the end of PHP evolution?
Do we need second Java or we need simple scripting language for simple scripts?
I think php is popular for it simplicity and it must stay simple. But it had to became secure years ago instead of trying to beat java.
max7, this is the PHP section of SitePoint the majority of us that have commented in on this thread are well versed in PHP, we know all the bumps and dips. Most of us also have other languages under our belts, we know what we are talking about. However, you on the other hand don't seem to have a clue. Are you here just to make yourself look like a fool?
Btw PHP already compiles to bytecode, what you think APC does? It cache the bytecode output of PHP.
Yes. I agree apc and xcode save php code in binary format. Most scripting languages convert scripts to a binary tree and then interpret this tree.
I predict that PHP7 compile to byte code.
In fact someone from php team wrote in first posts about php6 that it will compile to .NET byte code.
PHP already compiles to bytecode. It has done so as far back as PHP 3 at the most.
@max7 i have 2 sites running on same server cluster
combined they get 500,000 unique visitors / 2 million page views according to google analytics per day
both sites are hand coded by me and took years to grow and still growing
guess what they are coded and run on?
guess how many servers it takes?
guess how many times i got hacked?
i take security and efficiency as top concerns and daily measure and optimise the performance
oh and i come from a comp sci with C and Java background so i had a great laugh reading your posts, you trully made a fool of yourself
now to answer the thread starter and get back on topic
no i dont think php is going over the top, in fact i think its moving too slow
one is not forced to use them new features, you can code away in the old procedural style with no issues, the only thing that was forced was the php4 retirement (on which i started poll here before) and that took to long imho
personally i cant wait for php5.3 to come out, cant wait to organise my library with namespaces, cant wait to drop latest zend framework phar file and get on with the interesting bits
and then theres unicode, this really needs sorting fast, the sites mentioned above are localized and translated into a pile of languages but god damn it it was difficult to do :mad:
i also enviously look at .net developers, i used to do .net as well and some aspects make developers lives very easy, hell i even borrowed some of the concepts in my homebrew framework, and the tools! damnit eclipse just doesnt compare to visual studio, sigh
so to sum it up i think php has long way to go, and i sure hope it gets there faster otherwise developers at the top might move to greener pastures
anyways sorry for ranting but php does need more improvement and i sure hope they dont stop, i eagerly watch all the developments and provide feedback and bugfixes in projects such as zend framework
Quote:
I started to program in PHP3 then PHP4 then PHP5.
If you really are 13, then PHP4 came out when you were 5. I somehow doubt you started with PHP3.
Quote:
I hope someone will agree that python and ruby were designed by more clever guys then php.
I doubt it. PHP was designed to do exactly what it does - so were they. There's no level of intelligence involved.
Besides, Look at sitepoint. It's probably the biggest professional (that rules out DP) website development community in the world. What's it written in?
I'll give you a hint - look at the file extention.
Oh Jake, you obviously misunderstood him. Seeing how weird this guy is, he probably downloaded PHP3, then PHP4 and then PHP5. I wouldn't be surprised if that whole process took one day, which would show his level of understanding PHP.
I have not said that I am 13. I said "What you say if I say I am 13"
Most of you said that you believe I am 13.
BTW PHP 3 was widely used after PHP4 release for long time.
Most hosting company was not updating software for very long time.
Servers was not able to update automatically and for free.
I remember RedHat 7.3 was asking to subscribe to get access to automatic updates. Subscription cost was $1500 or $2400 a year.
PHP is free and opensource.
In fact, it wouldn't be difficult to make PHP auto-update. They just choose not to just incase an unexpected security flaw is in a minor release. Whilst it would get updated quickly, some servers can't take the risk. That's why it's better done manually.
Quote:
Most of you said that you believe I am 13.
You make it sound like the outcome shows that we're the fool... :lol:
Seriously, ignore the trolling, there is no use in responding to that and it only gets this topic even further offtopic.
@max7
I don't think you'll get any arguments from anyone that PHP could have been designed better from the start. Of course it could have, especially with the many function naming schemes there seem to be. But anyone can look back on almost anything and say that it could have been better than it is today.
I love and have great respect for full object-oriented languages like Python and Ruby, as I'm sure many others here do. The point is (and what people here are trying to tell you) is that PHP works great, it's easy, and it's fast. And PHP is everywhere, on almost every single web hosting server in the world - even the Fortune 500 guys.
The reason everyone is jumping down your throat is not because they just simply can't stand to see PHP being bashed; It's that your claims are ridiculous and completely unfounded. You provide no proof of any of your claims, and commit countless logical fallacies in your arguments, most namely Post Hoc. You come to the conclusion that PHP itself is insecure because there are many applications made with PHP that are insecure. But using PHP does not mean your application will automatically be insecure. You are confusing cause and effect. Just because grass is green does not mean everything that's green is grass.
I am sure that if you had provided compelling evidence that PHP itself was insecure and was causing the applications built with PHP to be insecure from the start, the tone of this discussion would be very different.
right, this has gone on far too long - thread closed and apologies to blueyon for the way this thread went.
Any complaints - PM me
Source: http://www.sitepoint.com/forums/printthread.php?t=573113&pp=25&page=4
A senior executive came rushing out of his office and shouted at his underlings: "Has anyone seen my pencil?". "It's behind your ear", replied one of the team. "Come on!", the executive demanded, "I'm a busy man! Which ear?". We've all met them. These are the people for whom web page logins are a pain. They much prefer to have a document nicely formatted and printed, and put on their desk.
UPDATE: The chart controls are included as a native part of ASP.NET from version 4.0 onwards, which means that you do not need to download them separately if you are using VS2010.
I will be generating a chart using LINQ to SQL to connect to the Northwind database, which is available here. In ASP.NET Web Forms, the chart controls are just that - server controls that can be dragged and dropped onto the Form Designer, and configured there. Within MVC there is no place for server controls, so we have to programme against their API instead.
The Chart controls can be rendered in a number of ways within Web Forms but ultimately generate an image that can be displayed using an <img> tag from disk, or streamed to the browser using an HttpHandler. In respect of MVC, an img tag will suffice that points to a controller action which generates the image:
<div><img src="Chart/GetChart" /></div>
And the action itself:
public FileContentResult GetChart() { return File(Chart(), "image/png"); }
The action returns a FileContentResult, which is the actual image as a byte array. So the byte array needs to be generated via the Chart() method as follows:
private Byte[] Chart()
{
    var db = new NorthwindDataContext();
    var query = from o in db.Orders
                group o by o.Employee into g
                select new { Employee = g.Key, NoOfOrders = g.Count() };

    var chart = new Chart
    {
        Width = 300,
        Height = 450,
        RenderType = RenderType.ImageTag,
        AntiAliasing = AntiAliasingStyles.All,
        TextAntiAliasingQuality = TextAntiAliasingQuality.High
    };
    chart.Titles.Add("Sales By Employee");
    chart.Titles[0].Font = new Font("Arial", 16f);
    chart.ChartAreas.Add("");
    chart.ChartAreas[0].AxisX.Title = "Employee";
    chart.ChartAreas[0].AxisY.Title = "Sales";
    chart.ChartAreas[0].AxisX.TitleFont = new Font("Arial", 12f);
    chart.ChartAreas[0].AxisY.TitleFont = new Font("Arial", 12f);
    chart.ChartAreas[0].AxisX.LabelStyle.Font = new Font("Arial", 10f);
    chart.ChartAreas[0].AxisX.LabelStyle.Angle = -90;
    chart.ChartAreas[0].BackColor = Color.White;
    chart.Series.Add("");
    chart.Series[0].ChartType = SeriesChartType.Column;

    foreach (var q in query)
    {
        var Name = q.Employee.FirstName + ' ' + q.Employee.LastName;
        chart.Series[0].Points.AddXY(Name, Convert.ToDouble(q.NoOfOrders));
    }

    using (var chartimage = new MemoryStream())
    {
        chart.SaveImage(chartimage, ChartImageFormat.Png);
        return chartimage.GetBuffer();
    }
}
I've put this in the Controller, hence the fact that the method is private. The LINQ query returns an anonymous type which contains Employee objects together with the total number of orders they have each generated. A Chart object is instantiated and some properties are set for rendering, including some fonts and labels. The resulting data from the LINQ query is bound to the chart using the AddXY() method. The chart is then saved to a MemoryStream object and then returned as an array of bytes. From there, it is displayed on the page:
The link displayed in the image above to "Get PDF" is generated by the following html:
<div><a href="Chart/GetPdf">Get PDF</a></div>
Using the same principle as with the Chart, the hyperlink points to a controller action: GetPdf():
public FilePathResult GetPdf()
{
    var doc = new Document();
    var pdf = Server.MapPath("PDF/Chart.pdf");
    PdfWriter.GetInstance(doc, new FileStream(pdf, FileMode.Create));
    doc.Open();
    doc.Add(new Paragraph("Dashboard"));
    var image = Image.GetInstance(Chart());
    image.ScalePercent(75f);
    doc.Add(image);
    doc.Close();
    return File(pdf, "application/pdf", "Chart.pdf");
}
This action is very simple if you already have some familiarity with iTextSharp. If not, refer to the first in my iTextSharp series of articles, together with the article that covers working with images. The action creates a new iTextSharp Document object. A paragraph is added that simply says "Dashboard", and then the same byte array generated by the Chart() method is passed to an iTextSharp.text.Image object. This is then reduced to 75 percent of its original size and added to the document. The Document.Close() method saves the resulting file to the location specified in the initial PdfWriter.GetInstance() call, and then it is returned through a FilePathResult class. Clicking the link generates an Open or Save dialogue box, and the complete PDF file:
A quick word about the usings that appear at the top of the controller code:
using System;
using System.IO;
using System.Linq;
using System.Web.Mvc;
using System.Web.UI.DataVisualization.Charting;
using iTextSharp.text;
using iTextSharp.text.pdf;
using PDFCharting.Models;
using Color = System.Drawing.Color;
using Font = System.Drawing.Font;
System.Web.UI.DataVisualization.Charting is needed so that you can work with Chart objects. PDFCharting.Models references the Models area of the application which contains the LINQ to SQL classes, and the final two references are there to avoid namespace clashes. There are a number of objects within the iTextSharp component which are named the same as commonly found .Net classes, such as Image and Font. Typically, to avoid the compiler complaining of ambiguity, you might use the fully referenced class name in code. For example, System.Drawing.Font. However, as an alternative, I have provided a namespace alias so that I can reference .NET classes without having to add the fully qualified name.
Summary
We have seen that Charts are generated as images, and used two different derivatives of ActionResult to deliver them: FileContentResult to stream the binary content directly to the browser, and FilePathResult to return a file saved to disk. In addition, we learned the basics of binding a LINQ query result to the data points on a Chart. We have also seen how to add a byte array as an image to an iTextSharp PDF document, and finally learnt a bit about namespace aliases.
This is a very simple example that is intended just to illustrate a starting point. The Chart() method should not normally appear within the controller itself, even as a private method. Not unless your application is very simple. From the point of view of maintainability and extensibility, you might find K Scott Allen's ChartBuilder class a good place to start in terms of separating the grunt work into its own area. If you are feeling really adventurous, there is no reason why you couldn't use the concepts presented in the ChartBuilder class to create a similar utility for building PDF files.
Source: https://www.mikesdotnetting.com/Article/115/Microsoft-Chart-Controls-to-PDF-with-iTextSharp-and-ASP.NET-MVC
I have seen a few different styles of writing docstrings in Python, is there an official or "agreed-upon" style?
Google maintains an excellent Python style guide. It includes conventions for readable docstring syntax that offer better guidance than PEP-257. For example:
def square_root(n):
"""Calculate the square root of a number.
Args:
n: the number to get the square root of.
Returns:
the square root of n.
Raises:
TypeError: if n is not a number.
ValueError: if n is negative.
"""
pass
I like to extend this to also include type information in the arguments, as described in this Sphinx documentation tutorial. For example:
def add_value(self, value):
"""Add a new value.
Args:
value (str): the value to add.
"""
pass
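One practical payoff of following these conventions is that the docstring travels with the function object: help() and documentation tools read it from __doc__. A small sketch reusing the square_root example above (with a body filled in for illustration):

```python
def square_root(n):
    """Calculate the square root of a number.

    Args:
        n: the number to get the square root of.

    Returns:
        the square root of n.
    """
    return n ** 0.5

# The formatted docstring is attached to the function object itself;
# help(square_root) renders it in the interactive interpreter.
print(square_root.__doc__.splitlines()[0])
```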
Source: https://www.edureka.co/community/36291/what-is-the-standard-python-docstring-format
SYNOPSIS
stap [ OPTIONS ] FILENAME [ ARGUMENTS ]
stap [ OPTIONS ] - [ ARGUMENTS ]
stap [ OPTIONS ] -e SCRIPT [ ARGUMENTS ]
stap [ OPTIONS ] -l PROBE [ ARGUMENTS ]
stap [ OPTIONS ] -L PROBE [ ARGUMENTS ]
stap [ OPTIONS ] --dump-probe-types
stap [ OPTIONS ] --dump-probe-aliases
stap [ OPTIONS ] --dump-functions
DESCRIPTION
The stap program is the front-end to the Systemtap tool. It accepts probing instructions written in a simple domain-specific language, translates those instructions into C code, compiles this C code, and loads the resulting module into a running Linux kernel or a DynInst user-space mutator, to perform the requested system trace/probe functions. You can supply the script in a named file (FILENAME), from standard input (use - instead of FILENAME), or from the command line (using -e SCRIPT). The program runs until it is interrupted by the user, or if the script voluntarily invokes the exit() function, or by sufficient number of soft errors.
The language is described in the SCRIPT LANGUAGE section below.
systemtap comes with a variety of educational, documentation and reference resources. They come online and/or packaged for offline use. For online documentation, see the project web site.
OPTIONS
- -t
- Collect timing information on the number of times probe executes and average amount of time spent in each probe-point. Also shows the derivation for each probe-point.
- -s NUM
- Use NUM megabyte buffers for kernel-to-user data transfer.
- -l PROBE
- Instead of running a full session, list all available probe points matching the given probe point pattern.
- -L PROBE
- Similar to "-l", but list probe points and script-level local variables.
- -F
- Without -o option, load module and start probes, then detach from the module leaving the probes running. With -o option, run staprun in background as a daemon and show its pid.
- --use-server[=SERVER]
- Specify compilation servers to be used. SERVER may be a host name or an IP address, optionally followed by a port number.
IP addresses may be IPv4 or IPv6 addresses.
If a particular IPv6 address is link local and exists on more than one interface, the intended interface may be specified by appending the address with a percent sign (%) followed by the intended interface name. For example, "fe80::5eff:35ff:fe07:55ca%eth0".
In order to specify a port number with an IPv6 address, it is necessary to enclose the IPv6 address in square brackets ([]) in order to separate the port number from the rest of the address. For example, "[fe80::5eff:35ff:fe07:55ca]:5000" or "[fe80::5eff:35ff:fe07:55ca%eth0]:5000".
If --use-server has not been specified, -pN has not been specified with N < 5, and the invoking user not root, is not a member of the group stapdev, but is a member of the group stapusr, then stap will automatically add --use-server to the options already specified.
- If the avahi-daemon is not running, then --list-servers will fail to detect any online servers. In order for --list-servers to detect servers listening on IPv6 addresses, the avahi-daemon configuration file /etc/avahi/avahi-daemon.conf must contain an active "use-ipv6=yes" line. The service must be restarted after adding this line in order for IPv6 to be enabled.
- --remote=URI
- Set the execution target to the given host. Supported URI forms include:
- unix:PATH
- This mode connects to a UNIX socket. This can be used with a QEMU virtio-serial port for executing scripts inside a running virtual machine.
- direct://
- Special loopback mode to run on the local host.
- --rlimit-as=NUM
- Specify the maximum size of the process's virtual memory (address space), in bytes. If nothing is specified, no limits are imposed.
- --rlimit-cpu=NUM
- Specify the CPU time limit, in seconds. If nothing is specified, no limits are imposed.
- --rlimit-nproc=NUM
- Specify the maximum number of processes that can be created. If nothing is specified, no limits are imposed.
- --rlimit-stack=NUM
- Specify the maximum size of the process stack, in bytes. If nothing is specified, no limits are imposed.
- --rlimit-fsize=NUM
- Specify the maximum size of files that the process may create, in bytes. If nothing is specified, no limits are imposed.
- --runtime=RUNTIME
- Set the runtime. Currently supported runtimes are kernel (the default) and dyninst. See ALTERNATE RUNTIMES below for more information.
- --dyninst
- Shorthand for --runtime=dyninst.
- --monitor[=INTERVAL]
- Enable monitor mode. The j/k/ArrowDown/ArrowUp keys can be used to scroll through the probe list and the d,D/u,U/PageDown,End/PageUp,Home keys can be used to scroll through the module output.
ARGUMENTS
Any additional arguments on the command line are passed to the script parser for substitution. See below.
SCRIPT LANGUAGE
The systemtap script language resembles awk and C. There are two main outermost constructs: probes and functions. Within these, statements and expressions use C-like operator syntax and precedence.
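As a minimal orientation (a standard example, not taken from this page), a complete script consists of a probe whose handler runs when the session starts:

```
probe begin {
  printf("hello world\n")
  exit()
}
```

Saved as hello.stp, this would be run with: stap hello.stp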
GENERAL SYNTAX
Whitespace is ignored. Three forms of comments are supported:
# ... shell style, to the end of line, except for $# and @#
// ... C++ style, to the end of line
/* ... C style ... */
A preprocessor CONDITION can also be composed of many alternatives and conjunctions of conditions using || and && respectively. However, parentheses are not supported yet, so it is important to remember that conjunction takes precedence over alternative. If the first part of a condition is an identifier like kernel_v, kernel_vr, or arch (as named by the kernel build system ARCH/SUBARCH), then the second part is one of the two string comparison operators == or !=, and the third part is a string literal for matching it. This comparison is a wildcard (mis)match.
Similarly, if the first part is an identifier like CONFIG_something to refer to a kernel configuration option, then the second part is == or !=, and the third part is a string literal for matching the value (commonly "y" or "m"). Nonexistent or unset kernel configuration options are represented by the empty string. This comparison is also a wildcard (mis)match.
If the first part is the identifier systemtap_v, the test refers to the systemtap compatibility version, which may be overridden for old scripts with the --compatible flag. The comparison operator is as is for kernel_v and the right operand is a version string. See also the DEPRECATION section below.
If the first part is the identifier systemtap_privilege, the test refers to the privilege level that the systemtap script is compiled with. Here the second part is == or !=, and the third part is a string literal, either "stapusr" or "stapsys" or "stapdev".
passed") {} %)
PREPROCESSOR MACROS
The preprocessor also supports a simple macro facility, run as a separate pass before conditional preprocessing.
Macros are defined using the following construct:
@define NAME %( BODY %)
@define NAME(PARAM_1, PARAM_2, ...) %( BODY %)
Macros, and parameters inside a macro body, are both invoked by prefixing the name with @. Optionally, within the .stpm files, a public macro definition can be surrounded by a preprocessor conditional as described above.
VARIABLES
Global variables are shared among all probes and live as long as the entire systemtap session. There is one
namespace for all global variables, regardless of which script file
they are found within. Concurrent access to global variables is
automatically protected with locks, see the
SAFETY AND SECURITY
section for more details. A global declaration may be written at the
outermost level anywhere, not within a block of code. Global
variables which are written but never read will be displayed
automatically at session shutdown. The translator will
infer for each its value type, and if it is used as an array, its key
types. Optionally, scalar globals may be initialized with a string
or number literal. The following declaration marks variables as global.
global var1, var2, var3=4
Global variables can also be set as module options. One can do this by either using the -G option, or the module must first be compiled using stap -p4. Global variables can then be set on the command line when calling staprun on the module generated by stap -p4. See staprun(8) for more information..
STATEMENTS
Statements enable procedural control flow. They may occur within functions and probe handlers. The total number of statements executed in response to any single probe event is limited to some number defined by the MAXACTION macro in the translated code, which may be overridden with the -D flag.
- numeric or string comparison, or regex matching operators
- < > <= >= == != =~ !~
- ternary operator
- cond ? exp1 : exp2
- grouping operator
- ( exp )
- function call
- fn ([ arg1, arg2, ... ])
- array membership check
- exp in array
[exp1, exp2, ...] in array
[*, *, ... ]in array
REGULAR EXPRESSION MATCHING
The scripting language supports regular expression matching. The basic syntax is as follows:
exp =~ regex
exp !~ regex
(The first operand must be an expression evaluating to a string; the second must be a string literal containing a syntactically valid regular expression.)
PROBES
The main construct of the scripting language identifies probes: abstract events paired with a handler block of statements that is executed when any of those events occur. The basic syntax is:
probe PROBEPOINT [, PROBEPOINT] { [STMT ...] }
Events are specified in a special syntax called "probe points". There are several varieties of probe points defined by the translator, and tapset scripts may define further ones using aliases. Probe points may be wildcarded, grouped, or listed in preference sequences, or declared optional. More details on probe point syntax and semantics are listed on the stapprobes(3stap) manual page.
The probe handler is interpreted relative to the context of each event. For events associated with kernel code, this context may include variables defined in the source code at that spot. These context variables are presented to the script as variables whose names are prefixed with $. New probe points may also be defined as aliases, whose definitions look like probe ALIAS = PROBEPOINT { }. Please note that in each case, the statements in the alias handler block are treated ordinarily, so that variables assigned there constitute mere initialization, not a macro substitution.
An alias is used just like a built-in probe type.
probe syscall.read {
    printf("reading fd=%d\n", fildes)
    if (fildes > 10) tracethis = 1
}
FUNCTIONS
For the %m memory conversion, the optional precision specifier (not field width) determines the number of bytes to read - the default is 1 byte. %10.4m prints 4 bytes of the memory in a 10-character-wide field.
- printf("%#o %#x %#X\n", 1, 2, 3)
  Prints: 01 0x2 0X3
- printf("%#c %#c %#c\n", 0, 9, 42)
  Prints: \000 \t *
STATISTICS
Arrays containing aggregates may be sorted and iterated. See the foreach construct above.
probe timer.profile {
    x[1] <<< pid()
    x[2] <<< uid()
    y <<< tid()
}
global x // an array containing aggregates
global y // a scalar
probe end {
    foreach ([i] in x @count+) {
        printf ("x[%d]: avg %d = sum %d / count %d\n", i, @avg(x[i]), @sum(x[i]), @count(x[i]))
        println (@hist_log(x[i]))
    }
    println ("y:")
    println (@hist_log(y))
}
TYPECASTING
Once a pointer (see the CONTEXT VARIABLES section of stapprobes(3stap)) has been saved into a script integer variable, the translator loses the type information necessary to access members from that pointer. The @cast operator restores it. Note that the same dereferencing operator -> is used to refer to both direct containment and pointer indirection; systemtap automatically determines which is meant. The optional module tells the translator where to look for information about that type. Multiple modules may be specified as a list with : separators. If the module is not specified, it will default either to the probe module for dwarf probes, or to "kernel" for functions and all other probe types.
The translator can create its own module with type information from a header
surrounded by angle brackets, in case normal debuginfo is not available. For
kernel headers, prefix it with "kernel" to use the appropriate build system.
All other headers are built with default GCC parameters into a user module.
Multiple headers may be specified in sequence to resolve a codependency.
@cast(tv, "timeval", "<sys/time.h>")->tv_sec
@cast(task, "task_struct", "kernel<linux/sched.h>")->tgid
@cast(task, "task_struct", "kernel<linux/sched.h><linux/fs_struct.h>")->fs->umask
Values acquired by @cast may be pretty-printed by the $ and $$ suffix operators, the same way as described in the CONTEXT VARIABLES section of the stapprobes(3stap) manual page.
When in guru mode, the translator will also allow scripts to assign new values to members of typecasted pointers.
Typecasting is also useful in the case of
void*
members whose type may be determinable at runtime.
probe foo {
    if ($var->type == 1) {
        value = @cast($var->data, "type1")->bar
    } else {
        value = @cast($var->data, "type2")->baz
    }
    print(value)
}
EMBEDDED C
Function arguments and the return value of an embedded-C function are accessed through the macros
STAP_ARG_*
and
STAP_RETVALUE.
(val) %{ strlcpy (STAP_RETVALUE, STAP_ARG_val, MAXSTRINGLEN); strlcat (STAP_RETVALUE, "one", MAXSTRINGLEN); if (strcmp (STAP_RETVALUE, "three-two-one")) STAP_RETURN("parameter should be three-two-"); %} function no_ops () %{ STAP_RETURN(); /* function inferred with no return value */ %}
The function argument and return value types have to be inferred by the translator from the call sites in order for this to work. The user should examine C code generated for ordinary script-language functions in order to write compatible embedded-C ones.
The last place where embedded code is permitted is as an expression rvalue.
In this case, the C code enclosed between
%{ and %}
markers is interpreted as an ordinary expression value. It is assumed
to be a normal 64-bit signed number, unless the marker
/* string */
is included, in which case it's treated as a string.
function add_one (val) { return val + %{ 1 %} }
function add_string_two (val) { return val . %{ /* string */ "two" %} }
The embedded-C code may contain markers to assert optimization and safety properties.
- /*.
- /* myproc-unprivileged */
- means that the C code is so safe that even unprivileged users are permitted to use it, provided that the target of the current probe is within the user's own process.
- /* guru */
- means that the C code is so unsafe that a systemtap user must specify -g (guru mode) to use this.
TAPSETS
A set of builtin probe point aliases are provided by the scripts installed in the directory specified in the stappaths(7) manual page. The functions are described in the stapprobes(3stap) manual page.
PROCESSING
The script files are processed together with the tapset library. If any tapset script file is selected because it defines an unresolved symbol, then the entirety of that file is added to the translator's resolution queue. This process iterates until all symbols are resolved and a subset of tapset script files is selected. At session shutdown, the end probes are run (except error-handling probes), and the session terminates. Finally, staprun unloads the module, and cleans up.
ABNORMAL TERMINATION
One should avoid killing the stap process forcibly, for example with SIGKILL, because the stapio process (a child process of the stap process) and the loaded module may be left running on the system. If this happens, send SIGTERM or SIGINT to any remaining stapio processes, then use rmmod to unload the systemtap module.
EXAMPLES
See the stapex(3stap) manual page for a brief collection of samples, or a large set of installed samples under the systemtap documentation/testsuite directories. See stappaths(7) for the likely location of these on the system.
CACHING
Successfully compiled scripts and modules are cached, so that later runs of the same script can reuse the results instead of recompiling.
PERMISSIONS
The root user or a user who is a member of both the stapdev and stapusr groups can build and run any systemtap script.
A user who is a member of both the stapsys and stapusr groups can only use pre-built modules under the following conditions:
- The module has been signed by a trusted signer. Trusted signers are normally systemtap compile-servers which sign modules when the --privilege option is specified by the client. See the stap-server(8) manual page for more information.
- The module was built using the --privilege=stapsys or the --privilege=stapusr options.
Members of only the stapusr group can only use pre-built modules under the following conditions:
- The module is located in the /lib/modules/VERSION/systemtap directory. This directory must be owned by root and not be world writable.
or
- The module has been signed by a trusted signer. Trusted signers are normally systemtap compile-servers which sign modules when the --privilege option is specified by the client. See the stap-server(8) manual page for more information.
- The module was built using the --privilege=stapusr option.
SECUREBOOT
RESOURCE LIMITS
With scripts that contain probes on any interrupt path, it is possible that those interrupts may occur in the middle of another probe handler. The probe in the interrupt handler would be skipped in this case to avoid reentrance. To work around this issue, execute stap with the option -DINTERRUPTIBLE=0 to mask interrupts throughout the probe handler. This does add some extra overhead to the probes, but it may prevent reentrance for common problem cases. However, probes in NMI handlers and in the callpath of the stap runtime may still be skipped due to reentrance.
In case something goes wrong with stap or staprun after a probe has already started running, one may safely kill both user processes, and remove the active probe kernel module with rmmod. Any pending trace messages may be lost.
UNPRIVILEGED USERS
Systemtap exposes kernel internal data structures and potentially private user information. Because of this, use of systemtap's full capabilities are restricted to root and to users who are members of the groups stapdev and stapusr.
However, a restricted set of systemtap's features can be made available to trusted, unprivileged users. These users are members of the group stapusr only, or members of the groups stapusr and stapsys. These users can load systemtap modules which have been compiled and certified by a trusted systemtap compile-server. See the descriptions of the options --privilege and --use-server. See README.unprivileged in the systemtap source code for information about setting up a trusted compile server.
The restrictions enforced when --privilege=stapsys is specified are designed to prevent unprivileged users from:
A member of the groups stapusr and stapsys may use all probe points.
A member of only the group stapusr may use only the following probes:
SCRIPT LANGUAGE RESTRICTIONS
The following scripting language features are unavailable to all unprivileged users:
- any feature enabled by the Guru Mode (-g) option.
- embedded C code.
RUNTIME RESTRICTIONS
The following runtime restrictions are placed upon all unprivileged users:
- Only the default runtime code (see -R) may be used.
Additional restrictions are placed on members of only the group stapusr:
- Probing of processes owned by other users is not permitted.
- Access of kernel memory (read and write) is not permitted.
COMMAND LINE OPTION RESTRICTIONS
Some command line options provide access to features which must not be available to all unprivileged users:
- -g may not be specified.
- The following options may not be used by the compile-server client:
  -a, -B, -D, -I, -r, -R
ENVIRONMENT RESTRICTIONS
The following environment variables must not be set for all unprivileged users:
SYSTEMTAP_RUNTIME SYSTEMTAP_TAPSET SYSTEMTAP_DEBUGINFO_PATH
TAPSET RESTRICTIONS
In general, tapset functions are only available to members of the group stapusr when they do not gather information that an ordinary program running with that user's privileges would be denied access to.
There are two categories of unprivileged tapset functions. The first
category consists of utility functions that are unconditionally
available to all users; these include such things as:
cpu:long ()
exit ()
str_replace:string (prnt_str:string, srch_str:string, rplc_str:string)
The second category consists of so-called
myproc-unprivileged
functions that can only gather information within their own
processes. Scripts that wish to use these functions must test the
result of the tapset function is_myproc and only call these
functions if the result is 1. The script will exit immediately if any
of these functions are called by an unprivileged user within a probe
within a process which is not owned by that user. Examples of
myproc-unprivileged
functions include:
print_usyms (stk:string)
user_int:long (addr:long)
usymname:string (addr:long)
A compile error is triggered when any function not in either of the above categories is used by members of only the group stapusr.
No other built-in tapset functions may be used by members of only the group stapusr.
ALTERNATE RUNTIMES
EXIT STATUS
The systemtap translator generally returns with a success code of 0 if the requested script was processed and executed successfully through the requested pass. Otherwise, errors may be printed to stderr and a failure code is returned. Use -v or -vp N to increase (global or per-pass) verbosity to identify the source of the trouble.
In listings mode (-l and -L), error messages are normally suppressed. A success code of 0 is returned if at least one matching probe was found.
A script executing in pass 5 that is interrupted with ^C / SIGINT is considered to be successful.
DEPRECATION
Over time, some features of the script language and the tapset library may undergo incompatible changes, so that a script written against an old version of systemtap may no longer run. In these cases, it may help to run systemtap with the --compatible VERSION flag, specifying the last known working version.
- Important files and their corresponding paths can be located in the
- stappaths (7) manual page.
BUGS
Use the Bugzilla link of the project web page or our mailing list, <[email protected]>.
SEE ALSO
error::reporting(7stap)
Do you use Twitter? If so, then you must have come across some of the bots that like, retweet, follow, or even reply to your tweets. But have you ever wondered how they are made? Well, it's as easy as filling water in a bottle. Haha! It's really not rocket science. So, let's get started and make a bot.
In Python, a Twitter bot is just a few lines of code, fewer than 30.
Prerequisites for making one (Bot)
- tweepy module in Python.
- A Twitter account, which you want to make a bot.
- A Twitter developer account.
Applying to the twitter developer account
To apply for the developer account on twitter, follow these steps:
Make sure you've logged in to your Twitter account on which you want to make a bot.
Here, I'm using my new account BashWoman to make a bot, which will like and retweet tweets with the hashtag #python3.
- Click on apply, after doing that this type of screen will show up.
Select, apply for a developer account.
- After this, you will get a number of options, why you want to apply for the developer account, here we are making a bot, so I will select Making a bot.
- Now, on the next page, you have to fill some details. Do that.
- Twitter will ask you some questions related to how you would use this account and the twitter data. We are just making this bot to like and retweet the posts so, select that only.
And
Else select no, just to keep it simple.
Enter all the details you'd do with this bot.
- After filling all the details, you'll get an agreement page. Just accept all the terms and conditions. And then click on Submit application.
- You will get a confirmation mail. Once you confirm that, a new window will open like this.
Click Get keys.
- After this, what we wanted by this developer account is the keys. Save them somewhere, you'll need them soon.
Let's Code and understand it
You see there are no more than 30 lines in Python. Let's understand each and every line.
import tweepy
import time
To communicate with the Twitter API, we need a module; here we are using tweepy. You can install it easily.
pip install tweepy
Once you install the module, write some more code.
# Authenticate to Twitter
CONSUMER_KEY = '<your-consumer-or-API-key-goes-here>'
CONSUMER_SECRET = '<your-consumer-or-API-secret-goes-here>'
ACCESS_KEY = '<your-access-key-goes-here>'
ACCESS_SECRET = '<your-access-secret-goes-here>'
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
This is used to authenticate your Twitter account. Remember these keys are for your account; don't share them with anyone, or they can access your data. That's why I have made some variables in which I will store the keys.
These keys can be found in your developer account; you saved them a moment ago.
The auth variable is created to authenticate the account; Twitter uses OAuth for this.
And, after that, we will set the tokens.
# Create API object
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
This class provides a wrapper for the API as provided by Twitter. If you get stuck somewhere, you can always refer to the tweepy documentation.
user = api.me()
search = '#python3'
numTweet = 500
for tweet in tweepy.Cursor(api.search, search).items(numTweet):
    try:
        print('Tweet Liked')
        tweet.favorite()
        print("Retweet done")
        tweet.retweet()
        time.sleep(10)
    except tweepy.TweepError as e:
        print(e.reason)
    except StopIteration:
        break
Finally, we tell the program to look for the keyword #python3 in a tweet, along with the number of tweets to process in a day. To like a tweet, use tweet.favorite(), and to retweet it, tweet.retweet().
The reason I'm using sleep is that Twitter has some guidelines you must follow; otherwise, your account will be restricted. There is a limit on the number of tweets you can like. If an error occurs, we catch tweepy.TweepError so that we know what went wrong.
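The sleep-and-catch pattern above can be pulled into a small helper that is testable without touching the Twitter API at all. This is only a sketch: the `act` callable and `fake_act` are stand-ins for the real `tweet.favorite()`/`tweet.retweet()` calls, and the injectable `sleep` argument lets you skip the real delays while experimenting.

```python
import time

def process_tweets(tweets, act, delay=10, sleep=time.sleep):
    """Apply `act` to each tweet, pausing between actions and
    recording errors instead of aborting the whole run."""
    results = []
    for tweet in tweets:
        try:
            act(tweet)  # e.g. tweet.favorite() and tweet.retweet()
            results.append((tweet, "ok"))
        except Exception as exc:  # tweepy.TweepError in the real bot
            results.append((tweet, "error: %s" % exc))
        sleep(delay)  # stay within Twitter's rate limits
    return results

# Stand-in action instead of real API calls, just to show the flow
def fake_act(tweet):
    if tweet == "bad":
        raise RuntimeError("rate limited")

print(process_tweets(["t1", "bad", "t2"], fake_act, sleep=lambda s: None))
```

In the real bot, `act` would wrap the favorite/retweet calls and the except clause would catch tweepy.TweepError specifically.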
Now, it's time for the deployment. You can use any platform, I have used Render.
After creating an account on this, create a cron job and schedule the interval; I prefer about 10 to 15 minutes. That means your bot will run every 10 to 15 minutes, so it won't violate the Twitter guidelines and your account will be safe and not get restricted.
Here's my bot.
It's time to build your own bot.
All the best.
Top comments (11)
Hello Seema, loved your explanation 🔥
I hope you are also aware of using
Streaminstead of
Cursorin your approach to get the tweets containing the particular hashtag. It would be very helpful for beginner level readers to understand the differences and apply them accordingly. But this is cool. Your line-by-line explanation is 👌🏻👌🏻
Thank you so much.
This has gotten my bot project further than anything else I have done to date. I am a dilettante in Python and am making a twitter bot to practice.
I am stuck on how to reply to a tweet with one of a set of messages (I think I got random.choice to work). My scheme is to reply to tweets based on keywords.
Thanks so much for this work. I love watching my code actually work!!!
I'm glad it helped.
Make one post on OAuth2 authentication from scratch.
By scratch I mean with just using requests library.
Sure.
Very interesting. I was looking for something like this.
I hope this will be helpful for you.
Thanks a lot! I was looking for something like this for a long time. I really wanna try it out 😃
Hello,
first of all, great explanation, but I was wondering if there is any way to make a bot that retweets a specific user's tweets?
thanks
Hello all, I need to calculate the amounts of the values stored in the arrays within a function. The calculations within the function, however, have to be done using pointers. I am having trouble figuring out how to return the calculated values and display them within main. This is what I have so far; any help would be great.
Code:
#include <stdio.h>

double extend (double [], double [], double []);

int main ()
{
    #define max 10
    double price [max] = {10.62, 14.89, 13.21, 16.55, 19.62, 9.47, 6.58, 19.32, 12.15, 3.99};
    double quantity [max] = {4, 9.5, 6, 7.35, 9, 15.3, 3, 5.4, 2.9, 4.9};
    double amount [max];
    int i;

    for (i=0; i<max; i++){
        printf ("The amount is %f\n", extend(price, quantity, amount));
    }
    system ("PAUSE");
    return 0;
}

double extend (double price[], double quantity[], double amount [])
{
    int i;
    double *gPtr, *mPtr, *nPtr;

    gPtr= price;
    mPtr= quantity;
    nPtr= amount;
    for (i=0; i<max; i++){
        *nPtr++= *mPtr++ * *gPtr++;
    }
    return (amount);
}
Microsoft has recently announced Project Bicep, a new DSL for ARM templates, which can be found at this GitHub repository.
DO NOT USE BICEP ON YOUR PRODUCTION UNTIL IT GOES TO V0.3
ARM Template Skeleton Structure
ARM template is a JSON file that follows a specific format. A basic template looks like:
The parameters, variables, resources and outputs attributes are nearly mandatory, so Bicep supports those attributes first. Due to this JSON syntax, there have been several issues for authoring.
- First, it's easy to write the JSON template incorrectly once the resource gets complex.
- Second, due to this complex structure, readability is dramatically reduced.
- Third but not least, parameters, variables, resources and outputs MUST be defined within their respective declared sections, which is not as flexible as other programming languages.
But Bicep has successfully sorted out those issues. Let's have a look how Bicep copes with it.
Bicep Parameters
Parameters in Bicep can be declared like below. Every attribute is optional, by the way.
On the other hand, to use the secure or allowed attribute, the parameter should be declared in object form (line #3-5).
Note that the parameter declaration is all about what type of value we will accept from outside, not what the actual value will be. Therefore, it doesn't use the equal sign (=) for the parameter declaration, except when assigning its default value.
Bicep Variables
Note that, unlike the parameters, the variables use the equal sign (=) because a value is assigned to the variable.
Bicep Resources
Bicep declares resources like below.
There are several things worth noting.
- The format used to declare a resource is similar to the parameter. The parameter declaration looks like param <identifier> <type>; the resource declaration looks like resource <identifier> <type>.
- However, the resource type section is quite different from the parameter declaration. It has a definite format of <resource namespace>/<resource type>@<resource API version> (line #1).
- I prefer using the providers() function, as I can't remember all the API versions for each resource.
- But using the providers() function is NOT recommended. Instead, the API version should be explicitly declared. To find out the latest API version, use the following PowerShell command.
- Bicep automatically resolves the dependencies between resources by using the resource identifier. For example, the Storage Account resource is declared first as st, then st is referred to within the Virtual Machine declaration (line #19).
- On the other hand, in the ARM template world, we should explicitly declare resource dependencies within the dependsOn attribute.
You might have found an interesting fact while authoring Bicep file.
- ARM templates should rigorously follow the schema to declare parameters, variables and resources. Outside its respective section, we can't declare them.
- On the other hand, Bicep is much more flexible with regard to where parameters, variables and resources are declared.
- We can simply declare param, var and resource wherever we like within the Bicep file; they are automagically sorted out during the build time.
Bicep Installation and Usage
Rough Comparison to ARM Template Authoring
Discussion (2)
Line numbers mentioned under the section Bicep Parameters are off, making it hard to understand.
@dexterposh Thanks for the comment! Yeah, I totally agree. Unfortunately, it's not something I can handle on my end because dev.to doesn't provide a way to highlight lines. At least we've got the line number at the left, though.
- certainly a WTF, and no, it's not clear code.
Admin
[[Frist, 0, 0], [0, Frist, 0], [0, 0, Frist]]
Admin
I don't understand the specification. An example where n=4, but n is still used in the example?
Admin
Hmmm.. the "case 1" and "case n" parts could go before or after the loop, and the loop would be from 2 to n-1. So no, in this case (pun wasn't intended, but is now) it's not necessary, but could be otherwise.
Admin
The point of the for/case antipattern is that it does something completely different on each iteration. It isn't simply "any case statement inside a loop", and anyone who thinks that's inherently bad without further context needs their head examined.
Admin
As above, this is surely not a WTF.
The "worse" challenge is too easy. Anything can be implemented badly given a choice of language and limited requirements!
I suspect TRWTF is the reason for these requirements - whatever's using the output should probably have the logic in it - but we'll never know.
Admin
Okay, now how to do it better (python with numpy):
import numpy as np
n = 4
e_1 = np.zeros(n) e_1[0] = 1
e_n = np.zeros(n) e_n[n-1] = 1
n_1 = np.identity(n)[:, 1:]
n_n = np.identity(n)[:, :-1]
Admin
stupid formatting, now with line breaks:
import numpy as np
n = 4
e_1 = np.zeros(n)
e_1[0] = 1
e_n = np.zeros(n)
e_n[n-1] = 1
n_1 = np.identity(n)[:, 1:]
n_n = np.identity(n)[:, :-1]
Admin
I don't know the exact syntax, but you can simply unroll the for-case-loop and be happy:
instead of:
you can do:
there, better. The same thing. People are just weirdly afraid of for-loops counting from 2 to n-1 instead of from 1 to n. There is nothing wrong with counting from 2 to n-1.
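For what it's worth, the whole structure can be built without any loop-and-case logic at all. A minimal sketch, assuming the shapes implied by the numpy snippets in this thread (e_1/e_n are length-n unit vectors; n_1/n_n are the identity matrix with its first or last column removed):

```python
def projection_arrays(n):
    """Build the four arrays with plain comprehensions, no for/case."""
    e_1 = [1] + [0] * (n - 1)  # first standard basis vector
    e_n = [0] * (n - 1) + [1]  # last standard basis vector
    # identity matrix minus its first column / minus its last column
    n_1 = [[1 if i == j + 1 else 0 for j in range(n - 1)] for i in range(n)]
    n_n = [[1 if i == j else 0 for j in range(n - 1)] for i in range(n)]
    return e_1, e_n, n_1, n_n

print(projection_arrays(3))
```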
Admin
You can actually even replace the somewhat cumbersome
n_1 = np.identity(n)[:, 1:]
n_n = np.identity(n)[:, :-1]
with
n_1 = np.eye(n, n-1, -1)
n_n = np.eye(n, n-1)
in python with numpy. If you do matrix math, just use a proper library with batteries included.
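A quick sanity check that the np.eye forms really match the sliced-identity forms (n and the definitions are taken from the snippets above):

```python
import numpy as np

n = 4
# slicing the identity matrix vs. building directly with np.eye
n_1_slice = np.identity(n)[:, 1:]
n_1_eye = np.eye(n, n - 1, -1)  # n x (n-1) with ones on the subdiagonal
n_n_slice = np.identity(n)[:, :-1]
n_n_eye = np.eye(n, n - 1)      # n x (n-1) with ones on the main diagonal

print(np.array_equal(n_1_slice, n_1_eye))  # True
print(np.array_equal(n_n_slice, n_n_eye))  # True
```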
Admin
You only need two lines of code with no for loop to take care of the two smaller arrays. From there the problem is simplified to a single small for loop with no switch/select.
Public Sub DefineProjectionArrays(ByVal n As Long, ByRef e_1 As Variant, ByRef e_n As Variant, ByRef n_1 As Variant, ByRef n_n As Variant)
    Dim i As Long, j As Long
End Sub
Admin
Uh, that 2 to n-1 idea is exactly what I wrote.
Admin
TRWTF is that if your going to be using matrix math, you should be using a tool designed specifically for matrix math. Like Matlab. Sure, you can do a lot of Engineering with Excel, but use the proper tool for the job please.
Addendum 2017-05-10 08:36: *you're. damnit.
Admin
def f(n): return (lambda x,y: ([1]+x,x+[1],[[0]*n]+y,y+[[0]*n]))([0]*(n-1),[[int(x==y) for x in range(n)] for y in range(n)])
much clearer
Admin
addendum: the comment formatter made *'s into emphasis /inside/ a code tag. there should be a manual for commenting here ...
Admin
Yes, there should. Maybe markdown syntax will do the trick? Let's try...
Everything Is A Python Oneliner:
e_1,n_1,e_n,n_n=[map(lambda x:map(int,x[slice(*s)]),[str(10**n+10**i)[1:] for i in reversed(range(n))]) for s in ([-1,None],[1,None],[None,1],[None,-1])]
Admin
MATLAB:
Admin
Another Matlab way is to
n_n=flipud(fliplr(n_1))
Admin
If you know you want 4 arrays filled with the specified values, why bother generating it?
Why not just statically declare them?
WTF?
Admin
Like. Same idea I had.
Admin
Because the sizes are dynamic. n is passed in as a parameter.
Admin
Agree, this is trying to solve a non-problem. Just type out the definition fucking try hards. Probably has fastest computation speed too.
Admin
What I see as the quintessential for-case antipattern is:
select case i
In this case, the fact that case else contains a significant amount of code (compared to the rest of the block) and uses the iteration variable means it's not for-case.
Admin
Ah...missed that...thanks :)
Quite frankly, I'm not recoiling in horror from the original code. Seems legit.
Admin
That was certainly my first thought, Dan... why the hell do you need a loop to deal with the one value in each of the single-dimension arrays?
Your code is what I would have written, and seems more intuitive to me as a result.
I did wonder why the smaller arrays couldn't just point to the appropriate column (or row) of the square arrays, though...
Admin
Watch out for overshooting the end of that array ;)
Admin
I kinda prefer a threesome of arrays. A little easier to populate...
Admin
I'm with the guys who point out that the real problem with the solution isn't the "anti-pattern:" it's a failure to understand the abstractions involved. (Which can often lead to an anti-pattern, I suppose.)
The first thing I thought when I saw the "documenting comment" was, oh boy, I wonder what the requirement is. I mean, there must be some reason to need these data structures. And there's every reason to believe that they are part of a bigger requirement, which may not yet be obvious, but will no doubt develop in time.
The second thing I thought was "Oh, look, it's sort of an identity matrix, except it isn't quite." This leads to the numpy solution given above, because the clearest and most maintainable solution is to create an identity matrix of the appropriate dimension and then to use matrix manipulation to produce the desired result. That way you are operating on actual matrixes, not on some arbitrary loop of ones and zeros.
The slight problem with using either numpy or Matlab is that, well, this is a VB.Net environment, so those don't really help that much. However, if you google/stack-overflow for .NET matrix libraries, you should be able to find a reasonable fit. (I found several.)
Admin
I desperately want to show some APL but I've not used it for 30 years.
Admin
Taking your comment to its logical conclusion, then, one would not create the matrix directly but, in say MATLAB, build it with maketform('projective',...) or fitgeotrans or whatever, in which case the matrix is treated as an opaque operator. I find that a lot of programmers, versus engineers and scientists, have a knee-jerk reaction against this level of abstraction, in part because it requires a specialized environment that has as many of the tools you need as possible, Matlab or SciPy. But also because of the large level of disconnect from what they perceive as programming. (For example, Matlab matrices are not arrays; they're complex copy-on-write structures that are completely opaque but make a lot of operations, like transposing, O(1).) Ultimately, though, it is the right tool for the majority of engineering-heavy small-scale problems.
Admin
I'd put it more simply in this case. 99% of programmers are not comfortable with matrix arithmetic. (I'm not particularly comfortable, for example.)
But my basic point is, if you are working in a domain where matrix transformations apply, then you are doing everybody else a disservice by not using matrix arithmetic in some way.
I mean, let's assume the product or project behind this thing uses eigenvectors or linear algorithms or any number of other things that are based upon matrix transforms. And let's assume that there are five or ten VB.Net programmers out there, each charged with a sub-task involving the implementation of one small part of what is essentially one, properly abstracted to the correct level, domain.
It isn't actually important that this one is an anti-pattern that obfuscates the abstractions inherent in the domain. It's important that there are five or ten other VB.Net programmers (in this case, the language is irrelevant) who are going to come up with five or ten completely separate and incompatible anti-patterns.
This is why I despise "clean code" freaks. Don't tackle the problem at the lowest level possible. Tackle it at the appropriate level, which is probably design and possibly even architecture.
Rant over.
Admin
For "linear algorithms" please read "linear algebra." Sorry 'bout that.
Admin
You just need to create an nxn identity matrix and slice it properly... like, in python (I don't know VBA, but this shouldn't be that hard...):
tmp = numpy.identity(n)
e_1 = tmp[:,0]
e_n = tmp[:,n-1]
n_1 = tmp[:,1:n-1]
n_n = tmp[:,0:n-2]
Admin
I'm not sure that this counts as an example of the "for-case" antipattern. That antipattern usually implies that the loop has a fixed (small) number of iterations, and that it does not save any effort at all, i.e. that you could fully unroll the loop and have the code be no more verbose than before. In other words, my understanding of the "for-case" antipattern is that it's only applicable for problems where a loop is unnecessary in the first place. That said, I can't remember the last time a "for-case" was useful for anything other than processing some stream of data coming in from outside the process (e.g. event handlers, simple command line option parsing).
Anyway, I'm not really sure what the reason is for interleaving the definitions of the different variables in this example. I've hardly used any form of Visual Basic (and when I did it was over a decade ago), but aside from possible syntax errors, my "better" answer would have been something like the below. (The comments assume that these arrays are matrices for some linear algebra problem, which seems like a reasonable guess based on the function name, and the pattern mentioned in the comments.)
The only duplication here is due to using two for loops, which you could easily combine if that really bothered you enough.
Admin
e_1 = NewArray( n, 1, 0# ): e_n = NewArray( n, 1, 0# )
n_1 = NewArray( n, n-1, 0# ): n_n = NewArray( n, n-1, 0# )
e_1( 1, 1 ) = 1#
e_n( n, 1 ) = 1#
For i = 1 To ( n - 1 )
Next i
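For comparison, a runnable numpy version of those four arrays. The shapes (e_1, e_n as n-by-1 columns; n_1, n_n as n-by-(n-1) slices of the identity) are my reading of the VB code above, so treat this as a sketch rather than the canonical answer:

```python
import numpy as np

def projection_arrays(n):
    """Build the four arrays by slicing an n x n identity matrix.

    Shapes are an assumption read off the VB code above:
    e_1, e_n are n x 1 columns; n_1, n_n are n x (n-1).
    """
    eye = np.identity(n)
    e_1 = eye[:, :1]    # first column, kept 2-D (n x 1)
    e_n = eye[:, -1:]   # last column (n x 1)
    n_1 = eye[:, :-1]   # identity with its last column dropped
    n_n = eye[:, 1:]    # identity with its first column dropped
    return e_1, e_n, n_1, n_n

e_1, e_n, n_1, n_n = projection_arrays(4)
assert e_1.shape == (4, 1) and e_1[0, 0] == 1.0
assert e_n.shape == (4, 1) and e_n[3, 0] == 1.0
assert n_1.shape == (4, 3) and n_n.shape == (4, 3)
```

The whole thing is four slices of one identity matrix, which is exactly the point the matrix-library camp is making.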
Seriously. I also sent this in via the link with my stuff attached just in case I win, but this is why that is a bad pattern. TRWTFs: doing things in loops that need to be done exactly once, doing what is clearly going to be matrix math without a matrix library, thinking that there isn't a better solution, and using variables to reference things that will never move (see their method for e_n using (i, 1) instead of just (1, 1)).
Anything that can be done with For-Case can be done in other, better ways. There are times when it gets shoehorned into an existing system because it's easy as heck to add and relatively easy to understand, but that doesn't make it good. If I'd been asked to review this code and it was in production, I would likely not have given more than a sideways glance. It isn't the worst anti-pattern in this case, but it is NOT good.
Admin
Site admin definitely needs to upgrade the Captcha - too many adbot intrusions.
Admin
Yeah, adbots with hernias, no less. (Check the domains in the links.)
Admin
That was fun:
[code]
/* allocate all 0's */
arr = calloc(n*n-1, sizeof(int));

/* set appropriate elements to 1 */
for(int i=1; i<n; i++){ arr[i*n-1] = 1; }

/* set the variables */
e_1 = n_n = &arr[n-1];
e_n = n_1 = &arr[0];
[/code]
Pure evil, but It Works! The best part is that it looks like it might be reasonable (and the author would crap on any objections because it's "so creative and elegant").
Admin
I was working in PHP today, so thought I'd do a little practice.
My rough, quick answer:
function DefineProjectionArrays($n, &$e_1, &$e_n, &$n_1, &$n_n) {
    if ($n < 2) {
        return;
    }
    $e_1 = $e_n = array_fill(0, $n, array_fill(0, 1, 0));
    $n_1 = $n_n = array_fill(0, $n, array_fill(0, $n - 1, 0));
}
And some creatively worse solutions: function DefineWorseProjectionArrays($n, &$e_1, &$e_n, &$n_1, &$n_n) { $e_1 = $e_n = $n_1 = $n_n = array();
}
function DefineEvenWorseProjectionArrays($n, &$e_1, &$e_n, &$n_1, &$n_n) { $e_1 = $e_n = $n_1 = $n_n = array();
}
Admin
The real WTF is that this code fails for n = 1. If my mental VB interpreter is correct, this code returns [1], [0], [[]], [[]] instead of [1], [1], [[]], [[]].
Admin
I like the fact that this example is written in an old enough version of VB to preclude the option of declaring 4 arrays with initializers. These arrays are small enough that they don't really need code to initialize them.
Admin
Nowhere near Pythonic enough. Plus, e_1 and e_n are supposed to be two-dimensional. You want this:
Xcode Version: 10.0 beta 6 (10L232m)
I'm getting an error instantiating a CLPlacemark with this constructor:
The error message is the following:
`Type of expression is ambiguous without more context`
This was working fine using the beta 5 build of Xcode. I notice the headers no longer include this constructor, but the documentation does not indicate it was deprecated or removed.
Any ideas?
Update: Looks like the constructor is included in the Intents framework (which is imported in the Swift file).
Hey there,
I just hit the same problem.
Can you tell me how to fix it? I've imported CoreLocation as well as Intents but still see the error
Cheers,
Georg
Just add
import Contacts
The initializer cannot resolve the address parameter type CNPostalAddress and gives this unhelpful error message.
Building Blazor shared pager component
I have already implemented and then blogged about a Blazor pager component and shared Blazor components. This post gathers my previous work and demonstrates how to build a shared Blazor pager component we can use in multiple projects.
Let's start with the shared Blazor component I introduced in my blog post Building Blazor shared components. By the end of that blog post I had a clean out-of-box shared component with dummy content and no code.
For the pager component I removed the content folder, as I consider pager styling to be the responsibility of the Blazor application.
This is the starting point: a bare minimal shared Blazor component. Let's go with a new project called BlazorLibrary.Components and clean it up the same way as shown on the image above. Notice that I changed the extension of Component1.cshtml to .razor.
Building shared Blazor pager component
Before going to the actual code, let's rename Component1.razor to Pager.razor. The pager needs to know the class that carries paging information. Let's define two classes: PagedResult<T> and PagedResultBase, almost the same ones I used with the pager view component for ASP.NET Core.
public abstract class PagedResultBase
{
public int CurrentPage { get; set; }
public int PageCount { get; set; }
public int PageSize { get; set; }
public int RowCount { get; set; }
}
public class PagedResult<T> : PagedResultBase where T : class
{
public IList<T> Results { get; set; }
public PagedResult()
{
Results = new List<T>();
}
}
You can find out more about paging in .NET Core from my blog posts Paging with Entity Framework Core and Returning paged results from repositories using PagedResult.
NB! It’s arhitectural decision where to keep these classes. For demo purposes it’s okay to keep these in Blazor shared component project. In practice these classes are used also by other projects and other layers (shared data clients, Entity Framework database context etc). If you build something real then make sure you define these classes in correct layer.
Here is the content for Pager.razor file. Don’t worry about PagerModel class right now as we take care of this at next step.
@inherits PagerModel
@if (Result != null)
{
<div class="row">
<div class="col-md-8 col-sm-8">
@if (Result.PageCount > 1)
{
<ul class="pagination pull-right">
<li><button type="button" onclick="@(() => PagerButtonClicked(1))" class="btn">«</button></li>
@for (var i = StartIndex; i <= FinishIndex; i++)
{
var currentIndex = i;
@if (i == Result.CurrentPage)
{
<li><span class="btn btn-primary">@i</span></li>
}
else
{
<li><button type="button" onclick="@(() => PagerButtonClicked(currentIndex))" class="btn">@i</button></li>
}
}
<li><button type="button" onclick="@(() => PagerButtonClicked(Result.PageCount))" class="btn">»</button></li>
</ul>
}
</div>
</div>
}
Although most of the Blazor examples you see on the internet have page code in the same file inside a @functions block, you can still use code-behind files to keep code and presentation separated. You can find out more about this in my blog post Separating code and presentation of Blazor pages.
Let’s add new C# class to project and name it as Pager.razor.cs. Take a look at Solution Explorer – the new file is located nicely under Pager.razor like. Here is code for PagerModel class.
public class PagerModel : ComponentBase
{
    [Parameter]
    public PagedResultBase Result { get; set; }

    [Parameter]
    public Action<int> PageChanged { get; set; }

    public int StartIndex { get; private set; }
    public int FinishIndex { get; private set; }

    protected override void OnParametersSet()
    {
        // Show up to five page buttons on either side of the current page
        StartIndex = Math.Max(1, Result.CurrentPage - 5);
        FinishIndex = Math.Min(Result.PageCount, Result.CurrentPage + 5);

        base.OnParametersSet();
    }

    protected void PagerButtonClicked(int page)
    {
        PageChanged?.Invoke(page);
    }
}
Now we can build the project and we are done with our pager shared component.
Using Blazor shared pager component in other projects
Using the shared pager component is easy. In the project where we want to use it we have to add a reference to the pager library. After this we can add the pager to some Blazor page.
Here is the example from my Blazor demo project in Github (BlazorLibrary.AspNetCoreHostedBlazor.Client/Pages/Index.razor).
<Pager Result=@Books PageChanged=@PagerPageChanged />
@Books is the paged result with books and @PagerPageChanged is the event that fires when the user selects another page in the pager. This is how the pager looks in my Blazor demos.
If you need different markup and styling for pager then feel free to change Pager.razor file to better reflect your needs.
Wrapping up
Although most of this post is well covered in my previous writings, it is still a good example of how to build something useful on Blazor today. Shared components support was not official at the time of writing, but I'm sure it will come to Blazor soon. A pager component is useful, as it is needed in almost all applications that display tables or lists of data to users. Now this cool component is in its own library and we can share it between projects.
This section illustrates how to use the BreakIterator to parse a string on a line break.
Parsing a text is a common task. Through the use of BreakIterator class, you can perform parsing in a language-independent manner as it is locale sensitive. This class provides factory methods to create instances of various types of break iterators. Here we are going to break a text string on the basis of line.
You can see in the given example that we have invoked the factory method getLineInstance() of the BreakIterator class and passed the string 'Hello\nWorld' to the setText() method. The method getLineInstance() creates a BreakIterator for line breaks using the default locale, and the setText() method sets the text string to be scanned. Then we have created a loop to find the location of the newline character (\n) in the string and break the text there into different lines.
current(): This method of the BreakIterator class returns the character index of the text boundary that was most recently returned.
next(): This method of the BreakIterator class returns the boundary following the current boundary.
Here is the code:
import java.text.*;

public class LineBreak {
    public static void main(String[] args) {
        String str = "", st = "Hello\nWorld";
        BreakIterator bi = BreakIterator.getLineInstance();
        bi.setText(st);
        int index = 0;
        while (bi.next() != BreakIterator.DONE) {
            str = st.substring(index, bi.current());
            System.out.println(str);
            index = bi.current();
        }
    }
}
Output (the first substring still contains the newline, so println leaves a blank line after "Hello"):
Hello

World
Stores runtime type information. More...
#include <Inventor/SoType.h>
Stores runtime type information.
The SoType class keeps track of runtime type information in Open Inventor. Each type is associated with a given name, so lookup is possible in either direction.
Many Open Inventor classes request a unique SoType when they are initialized. This type can then be used to find out the actual class of an instance when only its base class is known, or to obtain an instance of a particular class given its type or name.
Note that the names associated with types of Open Inventor classes do not contain the "So" prefix.
SoAction, SoBase, SoDetail, SoError, SoEvent, SoField
Returns an always-illegal type.
Useful for returning errors.
Some types are able to create instances; for example, most nodes and engines (those which are not abstract classes) can be created this way.
This method returns TRUE if the type supports such creation.
Creates and returns a pointer to an instance of the type.
Returns NULL if an instance could not be created for some reason. The pointer is returned as a generic pointer, but can be cast to the appropriate type.
For example:
SoCube *c = (SoCube *) SoCube::getClassTypeId().createInstance();
is a convoluted way of creating a new instance of an SoCube.
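The name-to-factory idea behind createInstance() can be sketched outside Open Inventor. The following is an illustrative C++ analogue, not the SoType API; all names here (TypeRegistry, Node, Cube) are invented for the example:

```cpp
#include <functional>
#include <map>
#include <string>

// Illustrative only -- NOT Open Inventor code. A name -> factory table
// supports the two lookups described above: finding a type by name and
// creating an instance of it, returning nullptr on failure just as
// SoType::createInstance() returns NULL.
struct Node { virtual ~Node() = default; };
struct Cube : Node {};

class TypeRegistry {
public:
    using Factory = std::function<Node*()>;

    static void add(const std::string& name, Factory f) {
        table()[name] = std::move(f);
    }

    // Returns nullptr if no type with that name is registered.
    static Node* createInstance(const std::string& name) {
        auto it = table().find(name);
        return it == table().end() ? nullptr : it->second();
    }

private:
    static std::map<std::string, Factory>& table() {
        static std::map<std::string, Factory> t;
        return t;
    }
};
```

A caller would register "Cube" once at startup, then create instances by name and downcast the generic pointer, mirroring the SoCube example above.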
Returns the type associated with the given name.
Adds all types derived from the given type to the given type list.
Returns the number of types added.
Returns the name associated with a type.
Returns the type of the parent class.
Returns TRUE if the type is a bad type.
Returns TRUE if the type is derived from type t.
Returns TRUE if this type is not the same as the given type.
Less-than comparison operator that can be used to sort types.
This is pretty useless otherwise.
Returns TRUE if this type is the same as the given type.
[using the updated nbd list email] On 02/14/2018 08:35 AM, Vladimir Sementsov-Ogievskiy wrote:
Hi all. Just note: looks like we allow zero-sized metadata context name. Is it ok?

NBD_REP_META_CONTEXT (4)

A description of a metadata context. Data:
- 32 bits, NBD metadata context ID.
- String, name of the metadata context. This is not required to be a human-readable string, but it MUST be valid UTF-8 data.
No; elsewhere we state:
Metadata contexts are identified by their names. The name MUST consist of a namespace, followed by a colon, followed by a leaf-name. The namespace must consist entirely of printable non-whitespace UTF-8 characters other than colons, and be non-empty. The entire name (namespace, colon, and leaf-name) MUST follow the restrictions for strings as laid out earlier in this document. Namespaces MUST consist of one of the following:
- base, for metadata contexts defined by this document;
- nbd-server, for metadata contexts defined by the implementation that accompanies this document (none currently);
- x-*, where * can be replaced by an arbitrary string not containing colons, for local experiments. This SHOULD NOT be used by metadata contexts that are expected to be widely used.
- A third-party namespace from the list below.
So a name must be at least 2 bytes (for a one-byte namespace, if someone ever registers one - and supposing that namespace has zero-byte leaf names), but will more commonly be even longer.
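The naming rule quoted above is easy to check mechanically. Here is a rough Python sketch of it; the function name and the exact printable-character test are my own, not part of the NBD specification:

```python
def valid_context_name(name):
    """Rough check of the metadata context naming rule quoted above:
    a non-empty namespace of printable, non-whitespace, non-colon
    characters, then ':', then a leaf-name (possibly empty)."""
    namespace, sep, leaf = name.partition(":")
    if not sep or not namespace:
        return False
    return all(ch.isprintable() and not ch.isspace() and ch != ":"
               for ch in namespace)

assert valid_context_name("base:allocation")
assert not valid_context_name("")           # the zero-sized name in question
assert not valid_context_name("noseparator")
```

Under this reading, a zero-sized name fails immediately because it has no namespace and no colon.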
-- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3266 Virtualization: qemu.org | libvirt.org
Parse suboptions from a string
#include <stdlib.h> int getsubopt( char** optionp, char* const* tokens, char** valuep );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
#include <stdlib.h>

char *myopts[] = {
#define READONLY  0
    "ro",
#define READWRITE 1
    "rw",
#define WRITESIZE 2
    "wsize",
#define READSIZE  3
    "rsize",
    NULL };

main(argc, argv)
int argc;
char **argv;
{
    int sc, c, errflag;
    char *options, *value;
    extern char *optarg;
    extern int optind;
    . . .
    while((c = getopt(argc, argv, "abf:o:")) != -1) {
        . . .
    }
    if (errflag) {
        /* print usage instructions etc. */
    }
    for (; optind < argc; optind++) {
        /* process remaining arguments */
    }
    ...
}
During parsing, commas in the option input string are changed to null characters. White space in tokens or token-value pairs must be protected from the shell by quotes.
25 January 2010 19:03 [Source: ICIS news]
HOUSTON (ICIS news)--
“Oil in the $70/bbl range becomes a manageable exercise,” said chief executive William Hickey during a conference call with investors.
He noted that the company had become accustomed to dealing with raw material cost volatility in the past few years.
Sealed Air is a large-volume converter of polyethylene (PE), polypropylene (PP) and polyurethane.
The packaging manufacturer expected double-digit sales growth for developing markets in 2010.
Hickey said the company was well-positioned in Asia and eastern Europe, and the
“But we’ve made no commitments at this time,” he added.
Sealed Air posted 2009 net earnings of $244m (€173m), a 37% increase from $178m in 2008.
($1 = €0.71)
NPM compatibility list and declaration files for Deno
A curated list of compatible NPM libraries and their definition files for use with Deno
This repository hosts a list of NPM libraries compatible with Deno as well as their definition files to interoperate with TypeScript inside the Deno compiler.
Because Deno only resolves fully qualified file names, type definitions that import other type definitions might not work with Deno. Also, when some type definition supply some global interfaces, they can conflict with Deno. The types located here have been validated to work with Deno.
There are several ways these type definitions can be referenced. Likely the "best" way is that the CDN provider provides a header of
X-TypeScript-Types which points to the type definitions. We are working to have this available, but currently you would need to use the compiler hint of
@deno-types. For example to import React:
// @deno-types="" import React from "[email protected]";
or
// @deno-types="" import React from "[email protected]";
The JSPM transformation of most libraries exports everything through the default namespace, so most of the time it might not be suited to work alongside this definition library.
Feel free to open an issue about a module you would like to see in this library. If you would like to add a package to this list, open a pull request and I'll review it as fast as possible.
All the libraries added must follow the following structure
.
├── ...
├── module_name              # Original library name
│   ├── version              # Specific version of the file, must follow 'v' convention Ex: v1.0.0, v2.3.1
│   │   ├── definition_file.ts      # File with the definitions for the library. The library can contain as many of these as needed
│   │   ├── tests                   # Folder containing tests for the library
│   │   │   └── definition_file_test.ts  # The definitions in this file should map to the main entrypoint of the library. The tests should use the Deno test library
└── ...
If the test files follow the test pattern they should be picked up by deno test.
To run the test suite simply run the following command (Deno v1.3+)
deno test -c tsconfig.json
Author: Soremwar
EIPinCPI – Routing Slip
Previous – Scatter-Gather | Index | Next – Process Manager
This week, we’ll study a routing pattern known as Routing Slip.
When do I use this pattern?
A routing slip contains information about the steps that need to be performed on a given message. For example, IDocs received from an ECC system may require different steps based on the message type. These steps can be configured in a Content-Based Router for each message type.
Routing Slip in CPI
In CPI, there is no built-in component for Routing Slip, so I will implement a custom Routing Slip in this blog.
We need two components to implement the Routing Slip pattern:
- A Content-based Router flow to attach the routing slip based on the message type
- A Routing Slip processor component that routes the message to the next step in the Routing Slip
Integration Flow – Routing Slip Attacher
Routing Slip Attacher
In this integration flow, the steps for each message type are configured.
Firstly, ‘Message Type?’ router routes the message based on the message type. The configuration is as follows:
Secondly, on each route, a Content Modifier is used to set Header ‘RoutingSlip’. These are the values for each message type:
Orders are mapped to target schema, customer information is added in the order information and, finally sent to the target system. Whereas, Invoices are mapped and sent to the target system.
The advantage of using Routing Slip is that the individual steps of enriching with customer information and sending IDocs to the target system can be reusable. The routing slip determines steps that a message needs to take.
Integration Flow – Routing Slip Processor
Routing Slip Processor
In this integration flow, if the previous step was the last step, then processing is ended, else the next step in the routing slip is determined using a Groovy Script.
This is the code for determining the next step:
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def routingSlip = message.getHeader('RoutingSlip', String)
    def steps = routingSlip.split(',')
    def nextStepIndex = steps.indexOf(message.getHeader('PreviousStep', String)) + 1
    message.setHeader('NextStep', steps[nextStepIndex])
    return message
}
Integration Flow – Individual Steps
The individual step must set header ‘PreviousStep’ with the path that it exposes and forward the message to Routing Slip Processor.
Mapping Flows
Mapping Flows
Enrich with Customer Information
Enrich with Customer Information
Send to Target
Send to Target
Execution
Orders
These are the steps taken in processing orders:
- When the IDoc is dispatched from ECC, it is first received by Routing Slip Attacher. Routing Slip Attacher attaches the Routing Slip by setting the header ‘RoutingSlip’ and forwards the message to Routing Slip Processor.
- Routing Slip Processor recognises that the next step is to map the payload and forwards the message to Mapping flow.
- Mapping flow sets the ‘PreviousStep’ header, maps the payload, and forwards it to Routing Slip Processor.
- Routing Slip Processor recognises that enriching is the next step and forwards it to ‘Enrich with Customer Information’ flow.
- ‘Enrich with Customer Information’ sets the ‘PreviousStep’ header, enriches the payload with Customer Information, and forwards it to Routing Slip Processor.
- Routing Slip Processor recognises that sending to target is the next step and forwards it to ‘Send to Target’ flow.
- ‘Send to Target’ flow sets the ‘PreviousStep’ header, sends the payload to target and forwards the response to Routing Slip Processor.
- Routing Slip Processor recognises that the previous step was the last one and ends the processing.
In essence, the payload takes this trip as configured in the Routing Slip Attacher:
Routing Slip Attacher => Routing Slip Processor => Mapping => Routing Slip Processor => Enricher => Routing Slip Processor => Send to Target => Routing Slip Processor.
Invoices
For invoices, steps 1, 2, 3, 6, 7, and 8 are performed.
The Invoice payload takes this trip as configured in the Routing Slip Attacher:
Routing Slip Attacher => Routing Slip Processor => Mapping => Routing Slip Processor => Send to Target => Routing Slip Processor.
Limitations of this implementation
- The same step cannot be used twice. At the moment, if the same step is mentioned twice in Routing Slip, then either the processing will get stuck in an infinite loop or the processing will end in an unexpected error.
- While using the same step twice in a Routing Slip seems rare, this limitation can be overcome by appending to ‘PreviousStep’ header instead of setting it. We will have to rename the header to ‘PreviousSteps’ of course ;).
- ProcessDirect in CPI does not support asynchronous processing. This means that the Routing Slip Attacher flow will keep waiting for response until all the steps in the Routing Slip are completed.
- This limitation can be overcome by using JMS in place of ProcessDirect.
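Outside CPI, the next-step lookup plus the 'PreviousSteps' fix described above can be modeled in a few lines. This Python sketch is illustrative only, with made-up step names, but it shows why tracking all completed steps lets the same step appear twice in a slip:

```python
def next_step(routing_slip, previous_steps):
    """Return the next step of a comma-separated routing slip, or None.

    Counting *all* previously completed steps (the 'PreviousSteps' idea
    above) instead of searching for the last step's name means a step
    can safely appear twice in the slip.
    """
    steps = routing_slip.split(",")
    done = previous_steps.split(",") if previous_steps else []
    if len(done) >= len(steps):
        return None  # the previous step was the last one
    return steps[len(done)]

slip = "Mapping,Enrich,SendToTarget"
assert next_step(slip, "") == "Mapping"
assert next_step(slip, "Mapping") == "Enrich"
assert next_step(slip, "Mapping,Enrich") == "SendToTarget"
assert next_step(slip, slip) is None
```

A slip like "Mapping,SendToTarget,Mapping" would also work here, whereas an indexOf lookup on the single previous step would loop forever on the first "Mapping".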
Conclusion
Routing Slip enables the developer to code reusable steps and to use them in a configurable manner based on a condition. A custom implementation is required to use Routing Slip pattern in CPI.
References/Further Readings
- Routing Slip Pattern in Enterprise Integration Patterns
- EIPinCPI Content-based Router
- CPI Components
Hope this helps,
Bala
P.S.: What do you think of my implementation of Routing Slip pattern? Did you spot any improvements? Do you think it can be implemented in a different way? Please let me know your thoughts in comments.
Previous – Scatter-Gather | Index | Next – Process Manager
I'm slowly ramping up on the use of Entity Developer and it's going well. The number of entities is growing and it would be preferable to organize them into namespaces based on the area of functionality. It would be nice to have the Entity Developer class hierarchy UI be namespace aware, but diagrams can be used to group the classes.
The class list below the .hbml file in Visual Studio is starting to get less manageable. It's great that the Entity Developer custom tool provides this functionality, but it would be preferable that it had a namespace sub-tree as well. I will add these namespace feature requests to user voice. In the meantime, does anyone have any suggested approaches for handling the organization of large entity models within both the Entity Developer UI and the Visual Studio project system?
FYI: ... me-space-i
Organizing classes in namespaces as the entity model gets larger
Discussion of open issues, suggestions and bugs regarding Entity Developer - ORM modeling and code generation tool
Re: Organizing classes in namespaces as the entity model gets larger
We recommend you to split the model into several thematic diagrams: ... html#split.
Re: Organizing classes in namespaces as the entity model gets larger
Will do, but it's not ideal - hence the user voice submission.
Ruby on Rails + OmniAuth + Devise: Implementing Custom User Attributes
When starting my first Ruby on Rails application, I knew one thing for certain: instead of writing line after line of code to manage user sign up, sessions and authentication, I wanted to take advantage of a great Ruby gem called Devise. For the uninitiated, Devise is billed as a "flexible authentication solution", but that's quite an understatement in my opinion. You can check out everything it does here, but to put it simply, Devise handles everything from user registrations, to sessions, to forgotten passwords and everything in between.
Using Devise relieves you of a lot of work and potential headaches and/or bugs. However, if you want your User model to have any attributes other than email and password, adjustments will need to be made. Luckily, this is as simple as adding those custom fields to Devise's pre-built forms! Easy enough.
Devise is a powerful tool on its own, but the project requirements for my app also called for the use of OmniAuth to allow users to sign in or sign up through a third-party like Facebook, Google, or GitHub (I ended up going with Facebook using the OmniAuth-Facebook gem). This is where that hurdle of implementing your additional model attributes into this sign-up flow will be a bit more tricky.
As an example of the issue at hand, let’s say I’m building a job finder Rails app. I have three models: Users, Jobs, and Applications. Users can either sign up as an employer or an applicant (I created an ‘employer’ attribute with a boolean value) — only employers can post jobs, and only applicants can apply to them. With the standard OmniAuth implementation, a user will click a link to direct them to the third-party sign-in form, and once they sign in, they’ll be redirected back to our server in order to create a User instance. What we need to do is get some data from the user before they click that “Sign In with Facebook” link, store that data somewhere, and pass it back to our model upon the User instance being created. So let’s get to it!
NOTE: We are only setting up this user flow for a new registration/sign up, not a new session/log in. This is because a user should only have to specify what type of user they are once — not every time they visit the site.
Step #1 is building out a route and corresponding view and controller action to get that sweet sweet data from our user. In our example, we simply want them to select between signing up as an employer or applicant. Here’s our route for that page — use whatever naming convention you prefer as these routes won’t exactly be following RESTful conventions.
devise_scope :user do
get '/confirm', to: 'users/omniauth_callbacks#landing', as: 'get_landing'
end
Note that this route will be nested in the :user devise_scope because we are utilizing Devise in our application. Now that we have our route, let’s establish our controller action:
def landing
  render 'devise/registrations/landing'
end
That was easy! Ok, onto our form — for this example, we just need a couple radio buttons and a submit button, plus a POST route and controller action to send the data we receive back to our model. There are a few ways to get this done (and mine probably isn’t the best!), but here’s my form code (including some Bootstrap styling):
<%= form_with url: sending_path do |f| %>
  <%= f.label :employer, class: "form-check-label" %>
  <%= radio_button_tag :employer, 1, class: "form-check-input" %>
  <%= f.label :employer, "Applicant", class: "form-check-label" %>
  <%= radio_button_tag :employer, 0, class: "form-check-input" %>
  <br><br>
  <%= f.submit "Submit", class: "btn btn-outline-primary" %>
<% end %>
And here’s the sending_path route that is being used as that form’s URL (placed in the same block under devise_scope :user as the previous route):
post '/confirm', to: 'users/omniauth_callbacks#add_to_session', as: 'sending'
Ok, now we’re cooking! This is where the rubber meets the road — we’re going to save that user-provided data into the user’s session (a great place to store small amounts of data needed through multiple requests!). Since my employer attribute is a boolean, I’m going to stick with 1’s and 0’s. As you can see in the form above, selecting ‘employer’ will set a value of 1, and selecting ‘applicant’ will set a value of 0. Here’s how we move that data into our sessions hash in the controller:
def add_to_session
  if params[:employer] == "1"
    session[:employer] = 1
  else
    session[:employer] = 0
  end
  redirect_to staging_path
end
Super simple, right? Now we have access to the user’s selection in the session[:employer] hash from any controller. That last line of code is redirecting the user to a landing page which finally has the ‘Sign In with Facebook’ button we’ve been waiting for (for Facebook Oauth, it’s taking the user to /users/auth/facebook). We’re almost there — but if we click that link now it will send the user through the standard Oauth sign-up flow, and we need to first make some adjustments so that our user is created with their user-type selection in mind.
First, let’s add some logic to the #facebook controller action to parse our session[:employer] hash into a boolean value that our model will be able to read and assign to the employer attribute.
employer = false
if session[:employer] == 1
  employer = true
end
Next, we'll update the .from_omniauth method (this should've been set up when first implementing OmniAuth!). My solution was to add a second argument to this class method — in my case, the 'employer' variable which holds that boolean value. Once inside that method, set the user's employer attribute equal to the variable you just passed in, and boom — your user who logged in via Facebook now has a designated user type.
NOTE: If you have OmniAuth set up to not only create but update a user upon signing in, you’ll want to use the fancy ||= operator when setting the user’s employer value. This is so that when a returning user signs back in using Oauth, the existing value of their employer attribute is used instead of a new (and likely nil) value.
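To make the ||= behavior concrete, here's a minimal Ruby sketch (the variable names are illustrative, not from the app above). One caveat for boolean attributes: ||= assigns on any falsy value, so it treats false the same as nil.

```ruby
a = nil
a ||= "new"   # nil is overwritten
b = true
b ||= "new"   # truthy values are kept
c = false
c ||= "new"   # caution: false is overwritten too, since ||= assigns on any falsy value

puts a.inspect  # "new"
puts b.inspect  # true
puts c.inspect  # "new"
```

In other words, ||= protects an existing employer value of true, but a returning user whose employer attribute is false would still be reassigned; something to keep in mind if both values are meaningful in your model.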
The final .from_omniauth method should look something like this, more or less:
def self.from_omniauth(auth, employer)
  user = find_or_initialize_by(provider: auth.provider, uid: auth.uid)
  user.email = auth.info.email
  user.password = Devise.friendly_token[0, 20]
  user.name = auth.info.name
  user.employer ||= employer
  user.image = auth.info.image
  user.save
  user
end
And there you have it. Now, not only do you have a modern sign-up/log-in system, but it’s customized to your liking. As I was working through this process I wasn’t able to find any blogs documenting how to go about this, so if you encountered the same problem, I hope this helped!
With help from this tutorial you will learn how to use the Yahoo Weather API to obtain and display weather forecasts with AS3.
Final Result Preview
Let's take a look at the final result we will be working towards:
Step 1: Create a New File
I'm assuming you'll use Flash, though you can do this with a Flex or standard AS3 Project.
Open Flash, go to File > New, select Flash File (ActionScript 3.0), then set the size as 320x180px and save the FLA wherever you want.
Step 2: Create a Document Class
Now go to File > New and this time select ActionScript File, then save it as Weather.as in the same folder where you saved your FLA file. Then go back to your FLA, open Properties and write the name of the ActionScript file in the "Class" field. (For more info on using a document class, read this quick introduction.)
Step 3: Setting Up the Document Class
Go to the ActionScript file and write the code for your document class:
package {
    import flash.display.MovieClip;

    // The name of the class has to be the same as the file
    public class Weather extends MovieClip {
        // Constructor: this function has to have the same name as the file and class
        public function Weather() {
            trace("This is your weather class");
        }
    }
}
Test it and it should trace "This is your weather class" in the output window.
Step 4: Check Out the Yahoo Weather API
Get yourself off to the Yahoo Weather API section of the Yahoo developers site; there you will find some explanations about the Yahoo Weather API.
Step 5: Ask For Your XML
What we need to read in Flash is an XML file, so we need to know how to ask for it, which is pretty simple. You need to think about where you want to know the weather and in what unit (Celsius or Fahrenheit) you want the temperature. Then, you can get XML with this data through this URL:
var url:String = "" + "?w=" + (location number) + "&u=" + ("c" for Celsius or "f" for Fahrenheit);
Step 6: Getting the Location Number
The location number needs to be a WOEID. To find your WOEID, browse or search for your city on the Yahoo Weather site and look at the page URL; the number at the end of it is the WOEID. For example, the Los Angeles page URL ends in 2442047, so the WOEID is 2442047.
Step 7: Understanding the XML
When you request any weather location, what you'll receive is XML like this:
<rss version="2.0" xmlns:yweather="http://xml.weather.yahoo.com/ns/rss/1.0" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#">
  <channel>
    <title>Yahoo! Weather - Los Angeles, CA</title>
    <link>...</link>
    <description>Yahoo! Weather for Los Angeles, CA</description>
    <language>en-us</language>
    <lastBuildDate>Mon, 01 Mar 2010 5:47 am PST</lastBuildDate>
    <ttl>60</ttl>
    <yweather:location city="Los Angeles" region="CA" country="US"/>
    <yweather:units temperature="C" distance="..." pressure="..." speed="..."/>
    <yweather:wind chill="..." direction="..." speed="..."/>
    <yweather:atmosphere humidity="..." visibility="..." pressure="..." rising="..."/>
    <yweather:astronomy sunrise="..." sunset="..."/>
    <image>
      <title>Yahoo! Weather</title>
      <width>142</width>
      <height>18</height>
      <link>...</link>
      <url>...</url>
    </image>
    <item>
      <title>Conditions for Los Angeles, CA at 5:47 am PST</title>
      <geo:lat>34.05</geo:lat>
      <geo:long>-118.25</geo:long>
      <link>...</link>
      <pubDate>Mon, 01 Mar 2010 5:47 am PST</pubDate>
      <yweather:condition text="Fair" code="..." temp="12" date="..."/>
      <description><![CDATA[
        <img src="..."/><br />
        <b>Current Conditions:</b><br />
        Fair, 12 C<br />
        <br /><b>Forecast:</b><br />
        Mon - Mostly Cloudy. High: 20 Low: 10<br />
        Tue - AM Clouds/PM Sun. High: 19 Low: 9<br />
        <br />
        <a href="...">Full Forecast at Yahoo! Weather</a><br /><br />
        (provided by <a href="...">The Weather Channel</a>)<br />
      ]]></description>
      <yweather:forecast day="Mon" date="..." low="10" high="20" text="Mostly Cloudy" code="..."/>
      <yweather:forecast day="Tue" date="..." low="9" high="19" text="AM Clouds/PM Sun" code="..."/>
      <guid isPermaLink="false">USCA0638_2010_03_01_5_47_PST</guid>
    </item>
  </channel>
</rss>
(If you want to understand all of the XML, see the Yahoo Weather API documentation.)
For this application what we need is the yweather:location tag, yweather:atmosphere tag and the yweather:forecast tags: the location tag will give us the text for the location, the atmosphere tag will give us the humidity and the forecast tags will give us the temperature for the current and the next day.
Step 8: Parse It
Now that we have a better understanding of all that XML, what we need to do is assign data to variables so that we can use that data to set up our application. For that we need to create some variables and load the XML. This is how you do it (put the code in the relevant places in your document class):
//This is going to contain all the data from the XML
private var _xmlData:XML;
//This is going to be the URL of the XML that we will load
private var _xmlURL:String;

private function loadXML(xmlURL:String):void {
    _xmlURL = xmlURL;
    var loader:URLLoader = new URLLoader();
    var request:URLRequest = new URLRequest(_xmlURL);
    loader.load(request);
    loader.addEventListener(Event.COMPLETE, loadData);
}
Let's go over that lump of code.
You need the _xmlData variable to be defined outside all functions (I've defined it as a private variable) because you will need to get it everywhere in the code, not just within one function.
The first function, loadXML(), loads the XML file into Flash; we use an event listener to check when it's completed, then run loadData().
The loadData() function assigns the received data to the _xmlData variable that we have already created. We use a namespace because that's how Yahoo decided to set up their XML (you can find more about namespaces at livedocs.adobe.com). The other variables in this function extract the information that we want to show in our app from the XML.
(For more info on parsing XML in AS3, check out Dru Kepple's AS3:101 - XML tutorial.)
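The loadData() handler referenced above isn't shown in the snippet itself; here is a minimal sketch of what it might look like. The namespace URI is the one Yahoo uses in its weather feed, and the actual extraction of values is left to Step 10.

```actionscript
// Namespace declaration needed for the yweather:: qualifiers used later
namespace yweather = "http://xml.weather.yahoo.com/ns/rss/1.0";

private function loadData(event:Event):void {
    // Store the received text as XML in the variable we created earlier
    _xmlData = new XML(event.target.data);
    // ...then read the values out of _xmlData (see Step 10)
}
```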
Step 9: Create Text Fields
Now we need to display that information. To do so we could create text fields in the code and assign a format and the text, but I prefer to use the Flash IDE, to save time. So get creative, we need eight text fields: temperature, humidity, maximum temp and minimum temp for the current day. Then we need maximum temp and minimum temp for the next day, one for the name of the next day and one more that shows the location. They all need to be dynamic text fields so we can assign the info.
Don't forget to give all your textfields instance names; I've chosen temp, humidity, max, min, maxt, mint, tomorrow and state.
Step 10: Display the Info
Now that we've created the text fields, we need to assign the information that we retrieved from the XML. For that we need the instance name of each text field and the info that we already have, like this (adding to your existing loadData() function):

// Assigning the information to the text fields
maxt.text = _xmlData.channel.item.yweather::forecast[1].@high + " °F";
mint.text = _xmlData.channel.item.yweather::forecast[1].@low + " °F";
state.text = _xmlData.channel.yweather::location.@city;
humidity.text = _xmlData.channel.yweather::atmosphere.@humidity + " %";
temp.text = _xmlData.channel.item.yweather::condition.@temp + " °F";
max.text = _xmlData.channel.item.yweather::forecast[0].@high + " °F";
min.text = _xmlData.channel.item.yweather::forecast[0].@low + " °F";

// Today's abbreviated day name, e.g. "Wed"; the switch turns it into tomorrow's full name
var day:String = _xmlData.channel.item.yweather::forecast[0].@day;
switch (day) {
    case "Sun": tomorrow.text = "Monday"; break;
    case "Mon": tomorrow.text = "Tuesday"; break;
    case "Tue": tomorrow.text = "Wednesday"; break;
    case "Wed": tomorrow.text = "Thursday"; break;
    case "Thu": tomorrow.text = "Friday"; break;
    case "Fri": tomorrow.text = "Saturday"; break;
    case "Sat": tomorrow.text = "Sunday"; break;
}
Remember the eight text fields that we created? Now we have to use those names here in the code. That switch statement is because we don't want to show just "Wed", "Thu" or "Fri", we want the entire name.
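As an aside, the switch works fine, but the same mapping can also be written as a lookup object if you prefer something shorter (a sketch, not from the original tutorial):

```actionscript
// Map today's abbreviated day name to tomorrow's full name
var nextDay:Object = { Sun: "Monday", Mon: "Tuesday", Tue: "Wednesday",
                       Wed: "Thursday", Thu: "Friday", Fri: "Saturday", Sat: "Sunday" };
tomorrow.text = nextDay[day];
```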
Step 11: Add Some Style
Right now we have just text; it would be nice to add some icons depending on the weather for that day. So what we need is to create or look for a set of weather icons and assign an icon depending on the weather. We can load one image from Yahoo, but it's not that nice so we'll find our own set. For that, download a set of icons and import them to Flash, then export each one for ActionScript with an appropriate class name:
The icons I'm using are from Garmahis and can be downloaded from garmahis.com. Big thanks to Garmahis for letting us use them!
Step 12: Adding the Icon
Now we have to load the correct icon depending on the weather code that we have in our XML. Just like the names of the days, we can do this with a really big switch... but first we need to create a movie clip to contain the icon.
private var _weatherToday:MovieClip = new MovieClip();
private var _weatherTomorrow:MovieClip = new MovieClip();

// The code below goes in the constructor
addChild(_weatherToday);
addChild(_weatherTomorrow);
_weatherToday.x = -80;
_weatherToday.y = -40;
_weatherTomorrow.x = 115;
_weatherTomorrow.y = -60;
And now the icons:
// This code goes in the loadData() function.
// codeToday holds the weather code read from the XML; here it is taken
// from today's forecast entry.
var codeToday:String = _xmlData.channel.item.yweather::forecast[0].@code;
var icon:DisplayObject;
switch (codeToday) {
    case "28": case "3200": case "30": case "44":
        icon = new weather01(); break;
    case "32": case "34":
        icon = new weather02(); break;
    case "24": case "25":
        icon = new weather03(); break;
    case "0": case "1": case "2": case "22": case "36": case "42": case "43":
        icon = new weather04(); break;
    case "19": case "20": case "21": case "23": case "26":
        icon = new weather05(); break;
    case "41": case "46":
        icon = new weather06(); break;
    case "3": case "4": case "37": case "38": case "39": case "45": case "47":
        icon = new weather07(); break;
    case "31": case "33":
        icon = new weather08(); break;
    case "27": case "29":
        icon = new weather09(); break;
    case "5": case "6": case "7": case "35":
        icon = new weather10(); break;
    case "8": case "10": case "13": case "14": case "15": case "16": case "17": case "18":
        icon = new weather11(); break;
    case "9": case "11": case "12":
        icon = new weather12(); break;
    case "40":
        icon = new weather13(); break;
}
if (icon != null) {
    _weatherToday.addChild(icon);
    _weatherToday.scaleX = 0.7;
    _weatherToday.scaleY = 0.7;
}
In this case I just used 13 icons but you can use more if you want, or fewer, that's up to you. Just remember, Yahoo uses 40 codes, so you have to assign them all to an icon. You can see a list of the meanings of all the codes at developer.yahoo.com.
Step 13: Add a Flip Effect
Well, we've covered the hard part; now let's make it look nice. If we want to add more information or change the location we will need more space, so we'll put all that we have created into one movie clip. To do that, just select it all, press F8 (to convert it to a symbol) and export your new symbol for ActionScript, with a class name of Front. Then erase it from the stage, create the background, and convert this to a movie clip and export it for ActionScript too, with a class name of Back.
Now let's call them from our ActionScript file:
private var _front:Front;
private var _back:Back;

// All the code below goes in the Weather() constructor
_front = new Front();
this.addChild(_front);
_front.y = 100;
_front.x = 160;
_front.rotationY = 0;
_front.btn.buttonMode = true;
_front.btn.addEventListener(MouseEvent.CLICK, turnAround);
_front.addChild(_weatherToday);

// This face starts out behind, so we don't want it visible yet,
// and we need to set its rotation to -180
_back = new Back();
_back.y = 100;
_back.x = 160;
_back.back.buttonMode = true;
_back.back.addEventListener(MouseEvent.CLICK, turnAround);
_back.rotationY = -180;
_back.visible = false;
this.addChild(_back);
Step 14: Set Up the Tween
We have our movie clip, so now we need to make it flip. To do that we are going to use the Tweener library. Download it, and extract it so that the \caurina\ folder is in the same folder as your FLA.
For this project we are going to use just one function from it: we'll make it flip using the turnAround() function to look cool. Put the following code in the appropriate places in your document class:
import caurina.transitions.Tweener;

private var _currentFace:String;

// Flips the faces, then calls the function that changes the order
// of the faces and finishes the animation
private function turnAround(event:MouseEvent):void {
    Tweener.addTween(_back,  { rotationY: -90, onComplete: changeIndex, time: 0.5, transition: "linear" } );
    Tweener.addTween(_back,  { scaleY: 0.6, scaleX: 0.6, time: 0.3, transition: "linear" } );
    Tweener.addTween(_front, { scaleY: 0.6, scaleX: 0.6, time: 0.3, transition: "linear" } );
    Tweener.addTween(_front, { rotationY: 90, time: 0.5, transition: "linear" } );
}

// We use a String, _currentFace, to know which face is in front
private function changeIndex():void {
    if (_currentFace == "front") {
        this.setChildIndex(_front, 0);
        Tweener.addTween(_back,  { rotationY: 0, time: 0.5, transition: "linear" } );
        Tweener.addTween(_back,  { scaleY: 1, scaleX: 1, time: 0.6, transition: "linear" } );
        Tweener.addTween(_front, { rotationY: 180, time: 0.5, transition: "linear" } );
        Tweener.addTween(_front, { scaleY: 1, scaleX: 1, time: 0.6, transition: "linear" } );
        _currentFace = "back";
        _front.visible = false;
        _back.visible = true;
    } else {
        this.setChildIndex(_back, 0);
        Tweener.addTween(_back,  { rotationY: -180, time: 0.5, transition: "linear" } );
        Tweener.addTween(_back,  { scaleY: 1, scaleX: 1, time: 0.6, transition: "linear" } );
        Tweener.addTween(_front, { rotationY: 0, time: 0.5, transition: "linear" } );
        Tweener.addTween(_front, { scaleY: 1, scaleX: 1, time: 0.6, transition: "linear" } );
        _currentFace = "front";
        _front.visible = true;
        _back.visible = false;
    }
}
Step 15: Add Locations
Now that we have more space in the back we can add more states or info or whatever you want. Briefly, I'll add more locations. What we need to do is to go to Flash and press Ctrl+F7 (Windows) or Command+F7 (Mac) to reveal the Components panel. Drag the Combo Box to your Library, then add this to your document class:
import fl.controls.ComboBox;

private var _comboBox:ComboBox;

// Inside the constructor
_comboBox = new ComboBox();
// The default text
_comboBox.prompt = "Choose your location:";
// Repeat this for each location that you want to add
// Remember to get the location's URL from the Yahoo site
_comboBox.addItem( { Location: "Mahtomedi", url: "" } );
// Calls the function that gives the value to the ComboBox
_comboBox.labelFunction = nameLabelFunction;
_comboBox.width = 150;
_comboBox.editable = false;
// Calls the function that is going to change the data
_comboBox.addEventListener(Event.CHANGE, changeLocation);
addChild(_comboBox);

private function nameLabelFunction(item:Object):String {
    var str:String;
    if (item == null) {
        str = _comboBox.value;
    } else {
        str = item.Location;
    }
    return str;
}

// Reload the data and reassign it throughout the application
private function changeLocation(event:Event):void {
    loadXML(_comboBox.selectedItem.url);
}
Step 16: Enjoy!
Now enjoy your application, add fancy stuff and credits (don't forget Yahoo!)
Conclusion
Now we have our weather application. I hope you learned a lot; if you have any questions, just leave a comment.
I hope you liked this tutorial, thanks for reading!
Created on 2020-11-19 20:36 by jmg, last changed 2020-11-21 03:50 by terry.reedy.
per:
There is not a way to document fields of a dataclass.
I propose that instead of making a language change, that an additional parameter to the field be added in similar vein to property.
This currently works:
```
class Foo:
    def getx(self):
        return 5
    x = property(getx, doc='document the x property')
```
So, I propose this:
```
@dataclass
class Bar:
    x: int = field(doc='document what x is')
```
This should be easy to support as I believe that the above would not require any changes to the core language.
How would you expect to extract this docstring?
I'm not sure how this would work in practice, since both of these are errors:
>>> class A:
...     def __init__(self):
...         self.x = 3
...         self.x.__doc__ = 'foo'
...
>>> A()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in __init__
AttributeError: 'int' object attribute '__doc__' is read-only
>>> class B:
...     x: int = 0
...     x.__doc__ = 'foo'
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in B
AttributeError: 'int' object attribute '__doc__' is read-only
It could be stored in the dataclass-specific data attached to a class, but then you'd have to use a dataclass-specific function to get access to it. I'm not sure that's a great improvement.
I also note that attrs doesn't have this feature, probably for the same reason.
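As a sketch of the "dataclass-specific storage" idea: the existing metadata parameter of field() already gives a place to hang a docstring today, at the cost of exactly the dataclass-specific accessor described above. The 'doc' key here is just a convention, not a stdlib feature.

```python
from dataclasses import dataclass, field, fields

@dataclass
class Bar:
    # Convention only: stash the docstring in the field's metadata mapping
    x: int = field(default=0, metadata={'doc': 'document what x is'})

def field_doc(cls, name):
    """Dataclass-specific accessor: find the field and read its metadata."""
    for f in fields(cls):
        if f.name == name:
            return f.metadata.get('doc')
    raise AttributeError(name)

print(field_doc(Bar, 'x'))  # document what x is
```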
As I said, I expect it to work similar to how property works:
```
>>> class Foo:
...     def getx(self):
...         return 5
...     x = property(getx, doc='document the x property')
...
>>> help(Foo)
Help on class Foo in module __main__:
class Foo(builtins.object)
| Methods defined here:
|
| getx(self)
|
| ----------------------------------------------------------------------
| Readonly properties defined here:
|
| x
| document the x property
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
```
The pure Python implementation of property uses the descriptor protocol, as documented in the Descriptor HowTo Guide: an object with __get__/__set__/__delete__ emulates attribute access, and that object can have its __doc__ attribute set to provide documentation.
@property has a place to attach the docstring, dataclasses in general do not. I wouldn't want to add a descriptor just to have the ability to add a docstring. There are performance issues involved, and I'm sure some corner cases where functionality would change.
Maybe if you bring this up on python-ideas you can get some more ideas.
> There is not a way to document fields of a dataclass.
I don't think there is an easy way to do this without a custom descriptor and that would add a lot of overhead.
Documenting dataclass fields strikes me as a reasonable request. Since most field values cannot have a .__doc__ attribute...
> It could be stored in the dataclass-specific data attached to a class,
The obvious place.
> but then you'd have to use a dataclass-specific function to get access to it.
There are already at least 2 functions for getting docstrings for objects without .__doc__. They could be enhanced with a dataclass-specific clause.
If ob.__doc__ is None, inspect.getdoc(ob) looks for a docstring elsewhere and sometimes succeeds. For this reason, I just changed IDLE calltips to use getdoc for calltips. If dataclasses.is_dataclass(ob.__class__), then I presume getdoc could retrieve the docstring from ob.__class__.__dataclass_fields__.
help(ob) uses pydoc, which gets docstrings with _getowndoc and if needed, _finddoc. Perhaps these should be replaced with inspect.getdoc, but until then, _finddoc could have an elif clause added for dataclass fields.
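The getdoc() fallback behavior being discussed can already be seen for classes today (a small illustration, not a dataclass-specific feature):

```python
import inspect

class Base:
    """Documented on the base class."""

class Child(Base):
    pass

# The subclass has no docstring of its own...
print(Child.__doc__)          # None
# ...but inspect.getdoc() climbs the inheritance hierarchy (since Python 3.5)
print(inspect.getdoc(Child))  # Documented on the base class.
```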
Hi,
Please find here a reference to Michigan business tax forms.
For sales tax issues, please refer to the 2009 Instruction Book.
You must register to pay sales tax if you make retail sales of goods in Michigan even if the items you sell are not taxable.
Who is required to have a sales tax license? - Individuals or businesses that sell tangible personal property to the final consumer need a sales tax license. An application for a sales tax license may be obtained on our web site. In order to register for sales tax, please follow the application process.
See the additional information available on the Michigan Treasury site.
The Michigan Department of Treasury also offers tax seminars for new businesses; see the registration form for sessions coming in Spring 2009.
You do not have to register LLC or corporation - unless you want to legally separate the business activities from your personal activities.
In this case you will be considered a self-employed.
Based on your estimated income, you need to pay quarterly estimated taxes to the IRS and to the Michigan Department of Treasury (using the 2009 Estimated Individual Income Tax Voucher).
Let me know if you need any help.
The IRS expects taxes to be paid as income is earned or received.
Assuming you expect gross income for 2009 of $30,000 and qualified business expenses of $5,000, your estimated net income would be $25,000.
If you are single, take the standard deduction, and have no dependents or other deductions and credits, you will owe self-employment taxes of $25,000 * 15.3% = $3,825.
Your taxable income will be $25,000 - $1,913 (half of the self-employment tax) - $5,600 (standard deduction) - $3,600 (personal exemption) = $13,887,
and your estimated income tax liability is $1,666.
Your total estimated tax liability would be $1,666 + $3,825 = $5,491.
Generally, you should make estimated tax payments for 2009 if, at tax time, you will owe more than $1,000 and the total amount of tax withheld plus your tax credits will be less than the smaller of: 90% of the tax to be shown on your 2009 tax return, or 100% of the tax shown on your 2008 tax return (110% if your 2008 adjusted gross income exceeded $150,000).
Michigan income tax rate is 4.35%
Some cities levy additional income taxes as well; check whether yours does.
Your Michigan taxable income based on assumptions above would be
$25,000 - $1,913 (half of the self-employment tax) - $3,600 (exemption allowance) = $19,487.
Your estimated Michigan tax liability is $19,487 * 4.35% = $848.
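The arithmetic above can be double-checked with a short script. The $1,666 income-tax figure comes from the tax tables, so it is taken as given here, and the deduction and exemption amounts are the ones used in the answer.

```python
# Recompute the estimates from the answer above, rounding to whole dollars.
gross, expenses = 30_000, 5_000
net = gross - expenses                            # estimated net income: 25,000

se_tax = round(net * 0.153)                       # self-employment tax: 3,825
half_se = (se_tax + 1) // 2                       # deductible half, rounded up: 1,913

federal_taxable = net - half_se - 5_600 - 3_600   # federal taxable income: 13,887
total_federal = 1_666 + se_tax                    # total federal liability: 5,491

mi_taxable = net - half_se - 3_600                # Michigan taxable income: 19,487
mi_tax = round(mi_taxable * 0.0435)               # Michigan tax: 848

print(federal_taxable, total_federal, mi_taxable, mi_tax)  # 13887 5491 19487 848
```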
Your main task would be to keep a good record of your income and business expenses.
Io Programming/Beginner's Guide/Objects
Objects
Io is a dynamically-typed, dynamically-dispatched, object-oriented programming language, very much like Self and Smalltalk. Up until this point, we have largely been dealing with only the most primitive things in the language.
The time has come, however, for the reader to learn about objects and how they are used within the Io environment.
What are Objects?
An object, used in the context of computer programming, is an abstraction of a concept or tangible thing which, anthropomorphically, you can tell what to do. For example, this statement:
Io> writeln("Hello world!")
can be seen as telling the computer to write a line to the screen. An object oriented language allows us to embrace this concept and take it to a level where it is a bit more useful for our needs. If you refer back to your first Io program, there was one line in it which read this:
you := File standardInput readLine
The expression is evaluated in the following order:
- First, File is evaluated. We're telling the computer to get us a File, whatever that is (we needn't be concerned with this just yet).
- Next, we tell the File to give us the standard input.
- Next, we tell that thing to read a line of text.
- Finally, we assign that to you.
What happens if we ask the computer itself to give us a standard input object?
Io> standardInput

  Exception: Object does not respond to 'standardInput'
  ---------
  Object standardInput                 Command Line 1
What we're seeing is an error message. It's composed of several components, which you should familiarize yourself with. You'll be seeing a ton of these things as you start out.
- The top line (Exception: Object does not...) is telling us that Object doesn't respond to the standardInput message.
- The bottom part is a traceback, which tells us where the error occurred. This is invaluable when trying to figure out where something has gone wrong.
So far, we have seen how objects are used to partition some namespace into useful libraries. But, we really haven't seen anything yet; objects are substantially more than just a fancy name for libraries.
How do Objects Work?
In order to see how to employ objects to make our programs more modular, it is useful to know "how they work," so to speak.
It's useful to think of an object as this thing that can follow orders. What orders it can follow are determined by what appears in its slots.
For example, let's make a little object of our own. The object-oriented equivalent to the infamous hello world program is none other than:
Io> Account := Object clone
Io> Account balance := 0
Io> Account withdraw := method(amount, self balance = self balance - amount)
Io> Account deposit := method(amount, self balance = self balance + amount)
I've elided the outputs from the Io interpreter here for brevity. But what we've just done is:
- created an Account object.
- configured the object to have a balance of zero.
- taught the Account object how to withdraw something (presumably cash)
- taught the Account object how to deposit something
We can query this object for the current account balance easily enough:
Io> Account balance
==> 0
This works because executing Account is enough to tell the interpreter, "OK, I'm now running in the context of this Account thing." The next instruction is balance. The interpreter sees that Account clearly implements a balance, and so executes it. In this case, its value is zero. Hence, the ==> 0 report.
Another way of looking at it, and indeed, the preferred way, is to think, "Hey! Account! You there! Balance, now!". In other words, we're telling the Account object to give us the balance. As you can see, this is not really a whole lot different from File standardInput readLine -- the latter simply has more commands, but the mechanics is exactly the same.
Next, let's deposit some cash in our account.
Io> Account deposit(100)
==> 100
This produces an unexpected result -- why did it return 100, and not nil? It's because the last thing it computed was self balance + amount, which evaluates to 0+100 when you think about it.
Take a closer look -- when we execute deposit(100), the deposit slot is fetched in the Account object, just as balance was above. And as with balance, the interpreter executes this slot. However, instead of it just being an unadorned value, it is a method. Remember when I said we'd get to methods in a bit?
- Observation
If you're trying to tell objects to do things, then you are obviously giving them messages, which these objects know how to interpret (somehow). In this case, both balance and deposit are messages. However, the method of interpreting these messages is entirely up to the object itself. Hence the name, method. Now that we've resolved one mystery, we've introduced another -- we will get to the significance of methods versus procedures in just a bit.
- NOTE
Mathematically speaking, deposit is not a function, because it has what are called side-effects. This means that evaluating the message will cause some change in state that persists beyond the evaluation of the message. Pure mathematical functions never do this. And now you know why we don't refer to deposit as a function either!
When running inside deposit, we see that we need to reference our balance with self balance. Here, self is clearly referring to the object in which the method is running, thus granting us access to our object's data.
- Observation
By default, all slots are local to the inner-most lexical scope. In other words, there are no global variables.
- Exercise
Try to figure out how Account withdraw(50) works!
Inheritance
I promise, this will be the last diversion before we can see how objects truly become useful. We need to find out how inheritance works, because it is a corner-stone in Io.
Looking again at our Account object above, the creation of the object involved cloning Object. In Io, everything is ultimately a kind of Object, but if you're creating something truly new and unique, you clone Object directly.
What this is saying is, well, an Account is an Object. But, we go on to specialize this object, with a balance, and a few methods to adjust the balance appropriately.
But, as with all things in Io, there is more to this very simple example than meets the eye. Consider, if we're always bossing objects around by sending them messages, where, then, does self get sent?
If you answered Object, you are not quite correct. Remember when I made the observation that, by default, everything in Io is always local? Method variables are no exception. The order in which Io looks things up is as follows:
- Look in the method itself first. There is no local variable named self.
- Look next in the Account object. Nope, nothing here either.
- Look next in Object. AHA!
Each object implements a list of prototypes, which it uses as a source of inspiration, if you will, to determine how to handle messages. If you send an object a message that it doesn't know how to handle, then it'll consult with its prototypes to find out how. If, and only if, it fails in this task, do you get that infamous error message, Object does not respond to whatever.
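This lookup chain can be sketched outside Io as well. Below is a rough Python model of prototype delegation; the `Proto` class and its `send` method are illustrative inventions of mine, not anything from Io's actual implementation (which supports a list of prototypes, not just one):

```python
class Proto:
    """A toy model of Io-style prototype lookup (illustrative only)."""

    def __init__(self, proto=None):
        self.slots = {}      # slots local to this object
        self.proto = proto   # a single prototype, for simplicity

    def clone(self):
        # Cloning creates an empty object that delegates to us
        return Proto(proto=self)

    def send(self, name):
        # Walk the prototype chain until some object has the slot
        obj = self
        while obj is not None:
            if name in obj.slots:
                return obj.slots[name]
            obj = obj.proto
        raise AttributeError("Object does not respond to " + name)


# Mirror the Account example: Object <- Account
Object = Proto()
Object.slots["type"] = "Object"
Account = Object.clone()
Account.slots["balance"] = 0
```

Sending `Account.send("type")` finds nothing in `Account` itself and falls through to its prototype, exactly the three-step search described above.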
Money, Money, Money!
Now that we've seen the basics of what objects are, and how they can inherit from other objects, let's see how we can put this to good use.
For starters, let's assume that you want multiple accounts. There are many different kinds of accounts, but we'll stick with a basic savings account. These accounts, over time, accrue interest. But, accounts from different banks accrue interest at different rates. How do we manage this additional bit of complexity?
First, we need a savings account:
Io> SavingsAccount := Account clone
As we have seen here, we've created a new object, which relies on Account as its inspiration for how to behave. We can verify it has no methods of its own:
Io> SavingsAccount slotNames foreach(println)
type
==> type
Well, OK, it has one slot, type, which when invoked will return
SavingsAccount. But nothing besides that. And, yet, we can still use it like a normal account object:
Io> SavingsAccount balance ==> 0
A savings account will typically have some interest rate:
Io> SavingsAccount interestRate := 0.045
With this, we can estimate how much we'll have at the end of the year:
Io> SavingsAccount yearEndEstimate := method(self balance * self interestRate + self balance)
So when this executes, the order of search is to try to find balance first in the method itself, then in SavingsAccount, then in Account, where it'll actually be defined.
Suppose we need bank accounts for your family. We can now do something like this:
Io> MomsAccount := SavingsAccount clone
Io> DadsAccount := SavingsAccount clone
Io> WifesAccount := SavingsAccount clone
So, we should be able to keep track of different accounts independently:
NOTE: this section describes behavior that does not happen with current Io version. See the Talk page for details.
Io> MomsAccount deposit(450)
Io> DadsAccount deposit(450)
Io> WifesAccount balance
==> 900
WHOA, how'd that happen? It looks like depositing all happens into a single balance!
- Exercise
Can you figure out why?
Regarding Classes, Prototypes, and Objects
You'll hear this term a lot in communities dedicated to object oriented programming, "instantiate an object of class x", where x is whatever class they happen to be talking about. For example, to create a new list, you might see mention of a certain kind of list class.
Io does not have any classes, and this is precisely why all the accounts of the preceding section went into a single balance. Since none of the derived objects implements its own balance message, it assumes that the object's prototype knows how to handle it. It turns out, this assumption was a mistaken one.
How do we, then, instantiate an object such that it creates its own balance slot every time we need it? We do this with an init method:
Io> SavingsAccount init := method(self balance := 0)
In doing this, we have converted SavingsAccount from just any ol' object, into a class. We can now logically reason about SavingsAccount objects all day long, as if they were types of SavingsAccount. And, indeed, they are exactly that.
We can now re-instantiate our family's accounts:
Io> MomsAccount := SavingsAccount clone
Io> DadsAccount := SavingsAccount clone
Io> WifesAccount := SavingsAccount clone
What happens here is that, right after the cloning process, SavingsAccount init is sent to ensure the object is properly initialized.
Io> MomsAccount deposit(450)
Io> DadsAccount deposit(450)
Io> WifesAccount balance
==> 0
Note that we didn't have to manually implement, or re-implement, the withdraw or deposit methods. Our assumption that the basic Account object knows how to handle withdrawals and deposits still holds. The only thing that was mistaken was where to withdraw from or deposit to. Therefore, by creating an account type that was aware of this confusion, we could take the opportunity to clarify before any further program code was executed.
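The role of init here is the same one __init__ plays in class-based languages: a hook run right after creation that gives every new object its own state. A minimal Python parallel (the class and names below are mine, not from the Io example):

```python
class Account:
    def __init__(self):
        # Runs right after creation, like Io's init runs after clone
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        self.balance -= amount


moms = Account()
dads = Account()
moms.deposit(450)
dads.deposit(450)
# Each account now tracks its own balance independently
```

Because __init__ assigns `self.balance` on every new instance, there is no shared slot for deposits to pile into.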
It should be noted that all classes are prototypes -- templates for use by another object. But, not all prototypes are necessarily classes. As we've seen, Account is also a prototype, but strictly speaking, it is not a class. It took SavingsAccount and its init method to preserve the assumptions that allowed us to treat it as a type rather than a thing. Yet, we are able to freely define types in terms of things if we want to, as we've done in this example. In class-based object-oriented languages, this kind of flexibility is out of the question.
You Needn't Get Everything Right the First Time
One point to remember when writing OO programs is, you don't need to get it right the first time. Above, I said that we can have multiple kinds of accounts, each with different interest rates, but the way the software so far has evolved, it's not really easily done. So, let's fix things:
Io> Account init := method(self balance := 0)
Io> SavingsAccount removeSlot("init")
Io> AccountWithInterest := Account clone
Io> AccountWithInterest yearEndEstimate := SavingsAccount getSlot("yearEndEstimate")
Io> SavingsAccount removeSlot("yearEndEstimate")
Io> SavingsAccount setProto(AccountWithInterest)
With a small handful of statements, we've just re-engineered the entire type hierarchy. Now, Account is the (so-called) base class, and SavingsAccount is now just a specialized kind of AccountWithInterest. Software which relied on SavingsAccount should still run, since we didn't change in any fundamental way what SavingsAccount objects do.
Now, we can go so far as to define what a CheckingAccount is:
Io> CheckingAccount := AccountWithInterest clone
Io> CheckingAccount interestRate := 0.015
This is why folks rarely use checking accounts to save up money.
Io> MomsChecking := CheckingAccount clone
...etc...
Obviously, this is a very simplistic example, but it shows you that the relationships between objects, prototypes, and classes can be changed, potentially even while a program is still running. They are not quite so static as they are in other languages.
That being said, I want to make it clear, that this kind of programming is great for exploratory work, but you should not depend on this for production code. If you "hot-fix" software like this, be sure to fix the program's source code accordingly, so that such hot-fixes aren't needed in the future.
One approach towards achieving this is test-driven development. However, this process is well beyond the scope of this book. Here, my task is to teach you how to write software in Io. It is not, however, how to design software. If you wish to know more about this, please see [1].
http://en.wikibooks.org/wiki/Io_Programming/Beginner's_Guide/Objects
I have a python program. It stores a variable called "pid" with a given pid of a process. First of all I need to check that the process which owns this pid number is really the process I'm looking for and if it is I need to kill it from python. So first I need to check somehow that the name of the process is for example "pfinder" and if it is pfinder than kill it. I need to use an old version of python so I can't use psutil and subprocess. Is there any other way to do this?
You can directly obtain this information from the
/proc file system if you don't want to use a separate library like psutil.
import os
import signal

pid = '123'
name = 'pfinder'

pid_path = os.path.join('/proc', pid)
if os.path.exists(pid_path):
    # /proc/<pid>/comm holds the process name
    with open(os.path.join(pid_path, 'comm')) as f:
        content = f.read().rstrip('\n')
    if name == content:
        os.kill(int(pid), signal.SIGTERM)  # os.kill expects an int pid
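Wrapping the same /proc approach in a function makes the error cases explicit. This is a sketch along the lines of the answer above, not code from it; note that on Linux, /proc/&lt;pid&gt;/comm truncates the name to 15 characters:

```python
import os
import signal


def kill_if_named(pid, name):
    """Send SIGTERM to `pid` only if its process name matches `name`.

    Reads /proc/<pid>/comm (Linux-only). Returns True if the signal
    was sent, False if the process is gone or the name doesn't match.
    """
    comm_path = os.path.join('/proc', str(pid), 'comm')
    try:
        with open(comm_path) as f:
            comm = f.read().rstrip('\n')
    except IOError:  # no such pid, or /proc not available
        return False
    if comm != name:
        return False
    os.kill(int(pid), signal.SIGTERM)
    return True
```

With this, `kill_if_named(pid, 'pfinder')` does the check and the kill in one call, and a stale pid simply returns False instead of raising.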
https://codedump.io/share/FciLUXmiAPB2/1/get-the-process-name-by-pid
I am trying to use mp-api and MPR.query(formula=‘Fe2O3’, fields=[…]) to query Fe2O3-related information from the Materials Project. May I ask how to retrieve the icsd_ids and corresponding BibTex Citation? They are displayed in the web app as follows but I didn’t find a relevant keyword to put in ‘fields’. Thanks!
Hi, I think the screenshot that you have shared is from the legacy site (legacy.materialsproject.org), while the mp-api is primarily designed to interact with the new Materials Project site (next-gen.materialsproject.org). Most of the information will likely be the same, but the new site is constantly being updated while the legacy site is now frozen.
Link to documentation for the API: Materials Project API - Swagger UI
For using the mp-api:
If you know the material project id (MP ID) for the Fe2O3 material of interest (for example, mp-19770), then you can query the provenance doc directly. The two key fields are “database_IDs” which will retrieve the ICSD IDs and the “references” field will retrieve the BibTex citations.
# Import
from mp_api import MPRester

with MPRester(API_KEY) as mpr:
    doc = mpr.provenance.get_data_by_id(
        "mp-19770", fields=["database_IDs", "references"]
    )

# To get the ICSD ids
doc.database_IDs

# To get the BibTex citations
doc.references
In the case that you don’t know the MP ID of the Fe2O3 material of interest, you can also use the mp-api to get all the MP IDs associated with Fe2O3 as shown below:
from mp_api import MPRester

with MPRester(API_KEY) as mpr:
    # Query the summary doc to get the MP IDs of all Fe2O3 materials in the MP.
    # You could filter by additional field arguments to reduce the number of
    # entries returned.
    # Fe2O3_docs is a list of SummaryDoc objects
    Fe2O3_docs = mpr.summary.search(formula="Fe2O3", fields=["material_id"])

# Create a list of the ids
mp_ids = [i.material_id for i in Fe2O3_docs]

# To then get the icsd_ids and references, we loop over the list of MP IDs
# and get the provenance doc for each material.
# provenance_docs will be a list of ProvenanceDoc objects.
with MPRester(API_KEY) as mpr:
    provenance_docs = [
        mpr.provenance.get_data_by_id(
            mp_id, fields=["material_id", "database_IDs", "references"]
        )
        for mp_id in mp_ids
    ]

# We could then look at one example from our query and print the MP ID,
# database IDs (if any) and references.
# Let's print out the first result from provenance_docs
print(provenance_docs[0].material_id)
print(provenance_docs[0].database_IDs)
print(provenance_docs[0].references)
Hope my responses have helped!
Thank you very much! It is very clear!
https://matsci.org/t/how-to-get-icsd-ids-via-mp-api/41970
Coding for Good: Working with the Sunlight Labs APIs
Aug 29, 2012 · Python
If you're looking to get your feet wet when it comes to working with open U.S. government data, I can think of no better place to start than with the Sunlight Labs APIs. They're not kidding when they say that using their APIs is absurdly easy.
Sunlight Labs is a project of the Sunlight Foundation, an organization that has been working for several years to access public government data** - the kind of data that is freely available on state and federal web sites, but that is buried behind a Byzantine series of links or is just poorly formatted for analytical use. Sunlight has done the hard work of finding that data and collecting it, and Sunlight Labs has created the tools that make it accessible for all of us to use.
**(Their other projects include Sunlight Reporting Group, Sunlight Live and the Open House Project.)
Currently they provide five APIs accessible with Python:
- Sunlight Congress API: returns information about legislators at the federal level
- Open States API: exposes similar information at the state level
sunlight-openstates:
- Capitol Words API: gives you a look at the most-used words in Congressional sessions
sunlight-capitolwords:
- Transparency Data API: specific data sets, such as campaign contributions and lobbying records
python-transparencydata:
- Real Time Congress API: data such as floor updates, committee hearings, floor video, bills, votes, amendments, and various documents
This script example uses the
openstates API to
- get all available data about legislators at the state level
- parse out only what's needed to do a summary count of party affiliations per state
- and return that information as:
- a JSON object that can be used for visualizations
- a table suitable for embedding into an HTML page
To use any of the libraries, you'll first need to get an API key:
It only took a few minutes for my key to arrive in the mail. Once you've got it, you have a few options for setting it (I used ~/.sunlight.key):
Then install the
sunlight module (this won't apply to the Transparency Data and Real Time Congress APIs) using either
pip install or checking out the project from Github:
With all that done, you're ready to go. Let's pop open an interpreter and play around with the given example:
>>> import sunlight
>>> nc_legs = sunlight.openstates.legislators(state='nc')
As you'll see, this returns a list of dicts, each dict containing a lot of publicly availably information - such as name, district, office address, party affiliation, in some cases even a picture - about each state legislator in the state of North Carolina:
>>> nc_legs
[{u'leg_id': u'NCL000242',
  u'first_name': u'Barbara',
  u'last_name': u'Lee',
  u'middle_name': u'',
  u'district': u'12',
  u'chamber': u'lower',
  u'url': u'',
  u'created_at': u'2012-08-10 02:06:05',
  u'updated_at': u'2012-08-29 02:09:04',
  u'email': u'Barbara.Lee@ncleg.net',
  u'+notice': u'[\xa0Appointed\xa008/06/2012\xa0]',
  u'state': u'nc',
  u'offices': [{u'fax': None,
                u'name': u'Capitol Office',
                u'phone': u'919-733-5995',
                u'address': u'NC House of Representatives\n300 N. Salisbury Street, Room 613\n\nRaleigh, NC 27603-5925',
                u'type': u'capitol',
                u'email': None}],
  u'full_name': u'Barbara Lee',
  u'active': True,
  u'party': u'Democratic',
  u'suffixes': u'',
  u'id': u'NCL000242',
  u'photo_url': u''},
 ...
One simple but powerful API call and we've already got so much information at our fingertips. So what can we do with all that data? Well, since the ultimate goal is to get a count of party affiliations per state, let's start by creating a list of state abbreviations. Then for each state in that list, we can make the same API call to get all the legislative data, and write a subset of that data - the state, the representative's full name, and their party affiliation - to a new dict.
states = ["AL", "AK", "AZ", "AR", ...]

def find_state_reps():
    # Start by instantiating the new dict:
    statereps = {}
    for s in states:
        legs = sunlight.openstates.legislators(state=s)
        # If you print 'legs', you'll see a dict with loads of
        # contact information for each state representative.
        # For my purposes, I'm only collecting name and
        # party affiliation.

        # This dict will hold {name:party} pairs for each state
        l = {}
        for leg in legs:
            name = leg['full_name']
            try:
                party = leg['party']
            except KeyError:
                # In some cases, 'party' is missing
                party = None
            l[name] = party
        statereps[s] = l
    # At this point, the 'statereps' dict contains:
    # {'state':{'rep_name':'party_affiliation'}}
    # for each state.
But you know what? Sunlight Labs is providing this API as a free resource, and I don't want to take advantage of their hard work by pounding their servers with a new set of 50 requests every time I run this script. So I'm going to write the dict to a file so that data doesn't have to be pulled from the API again.
outfile = 'state_reps_list.txt'
f = open(outfile, 'w')
f.write(str(statereps))
f.close()
Now, as I'm developing, I can just check to see if I have that file in place and use the dict from there. And when it's time to refresh the data, I can just delete the file and hit the API again to rebuild the
statereps dict from scratch:
import os.path

f = os.path.exists(outfile)
# If we've already got the list stored in a file,
# just refer to that file
# instead of hitting the API again:
if f:
    # Get the file content and return it as the statereps dict
    f = open(outfile, 'r')
    statereps = eval(f.read())
    f.close()
else:
    # Hit the API for the data
    ...
return statereps
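One caveat with the snippet above: eval() on file contents will execute whatever happens to be in the file. Since the data is a plain dict of strings, the stdlib json module is a safer serializer. Here's a sketch of that variant (the function names are mine):

```python
import json
import os.path


def load_cache(path):
    """Return the cached statereps dict from `path`, or None if absent."""
    if not os.path.exists(path):
        return None
    with open(path, 'r') as f:
        return json.load(f)


def save_cache(path, statereps):
    """Write the statereps dict to `path` as JSON."""
    with open(path, 'w') as f:
        json.dump(statereps, f)
```

The flow stays the same: if `load_cache(outfile)` returns None, hit the API and call `save_cache` with the fresh data.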
My
statereps dict looks something like this, but obviously contains a lot more data:
{
 'WA': {u'Bruce Chandler': u'Republican', u'Derek Kilmer': u'Democratic', ...},
 'WV': {u'Mike Green': u'Democratic', u'Mark Wills': u'Democratic', ...},
 ...
}
Now I can pass that data into another function that returns the summary count of party affiliations among state legislators per state (e.g., state: dems=x, repubs=y, other=z):
import re

def partycount(reps_dict):
    partycount = {}
    for s in reps_dict:
        # Create lists to hold the party members on a per-state basis:
        demlist = []
        replist = []
        otherlist = []
        for k in reps_dict[s]:
            # s -> state abbreviation
            # k -> full name
            # reps_dict[s][k] -> party affiliation
            if reps_dict[s][k]:
                # Use the re module to determine if either of these strings
                # appears in the party affiliation value
                dem = re.search('Dem', reps_dict[s][k])
                rep = re.search('Repub', reps_dict[s][k])
                # And funnel those values into the appropriate lists
                if dem:
                    # If the legislator's party affiliation contains the
                    # substring 'Dem', add their name to the 'dem' list:
                    demlist.append(k)
                elif rep:
                    # If the legislator's party affiliation contains the
                    # substring 'Repub', add their name to the 'rep' list:
                    replist.append(k)
                else:
                    # If neither substring appears in the legislator's party
                    # affiliation, add their name to the 'other' list
                    otherlist.append(k)
        c = {}
        # Get the length of each list and you have a count of
        # dems vs. repubs vs. other for this state:
        c['Democrats'] = len(demlist)
        c['Republicans'] = len(replist)
        c['Other'] = len(otherlist)
        partycount[s] = c
    return partycount
And now we've got (yet another) dict that looks like this:
{
 'WA': {'Republicans': 64, 'Other': 0, 'Democrats': 83},
 'DE': {'Republicans': 22, 'Other': 0, 'Democrats': 40},
 'DC': {'Republicans': 0, 'Other': 2, 'Democrats': 10},
 'WI': {'Republicans': 74, 'Other': 1, 'Democrats': 55},
 ...
}
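For what it's worth, the same tally can be written more compactly with collections.Counter. This is an alternative sketch of mine, not the post's original function:

```python
import re
from collections import Counter


def partycount_compact(reps_dict):
    """Per-state tally of Democrats / Republicans / Other.

    Missing or unrecognized party strings count as 'Other'.
    """
    result = {}
    for state, reps in reps_dict.items():
        c = Counter(Democrats=0, Republicans=0, Other=0)
        for party in reps.values():
            if party and re.search('Dem', party):
                c['Democrats'] += 1
            elif party and re.search('Repub', party):
                c['Republicans'] += 1
            else:
                c['Other'] += 1
        result[state] = dict(c)
    return result
```

The substring matching is the same as above; only the bookkeeping changes, trading the three per-state lists for a single Counter.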
Before I return that
partycount dict, I can insert this somewhat ugly bit of code into the function to generate an HTML page with all that data embedded in a table:
# This count data could just as easily be output as
# a template context object, or printed to stdout
output = "<html><body><table>"
output += "<tr><td><b>STATE</td><td><b>Republicans</b></td> \
<td><b>Other</b></td><td><b>Democrats</b></td></tr>"
# Let's sort the keys while we're at it,
# so the states appear in alphabetical order:
for key in sorted(partycount.iterkeys()):
    output += "<tr><td align='center'>%s</td>" % (key)
    for k in partycount[key]:
        output += "<td align='center'>%s</td>" % (partycount[key][k])
        percentlist.append(partycount[key][k])
    output += "</tr>\n"
output += "</table></body></html>"

f = open('redvblue.html', 'w')
f.write(str(output))
f.close()
One other thing - I can also take that first
statereps dict and convert it to json - that might be handy for doing visualizations down the road:
import simplejson as json

def converttojson(reps_dict):
    """ Take a dict object and convert it to JSON """
    result = json.dumps(reps_dict, sort_keys=False, indent=4)
    return result
Some resources for doing visualizations with the resulting JSON object:
Here are a few more things that I could see adding to this script:
- Add a line reading "Data current as of [date]" to the top of the html - use the filesystem date of the 'state_reps_list.txt' file (or the current date if you're getting the data fresh from the API):
datetime.datetime.fromtimestamp(os.path.getmtime(outfile))
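Putting that snippet to work, a small helper could produce the header line, falling back to the current time when the cache file doesn't exist yet (the helper name is mine):

```python
import datetime
import os.path


def data_current_line(path):
    """Build a 'Data current as of ...' line from a cache file's mtime.

    Falls back to now() when the file is missing, i.e. when the data
    would come fresh from the API anyway.
    """
    if os.path.exists(path):
        stamp = datetime.datetime.fromtimestamp(os.path.getmtime(path))
    else:
        stamp = datetime.datetime.now()
    return "Data current as of %s" % stamp.strftime('%Y-%m-%d %H:%M:%S')
```

The returned string can then be prepended to the generated HTML before the table.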
Get unemployment data (source: US Department of Labor, Bureau of Labor Statistics) and compare on a per-state basis to see if there is any correlation between unemployment rates and dominance of any particular party at the state level:
- Use the Transparency Data API to see how campaign contributions compare from state to state
My complete script, minus the changes mentioned above (which I have already implemented locally) can be found here:
And incidentally, here's that table output:
State legislative data current as of 2012-08-29 11:48:26
http://www.mechanicalgirl.com/post/coding-good-working-sunlight-labs-apis/
20 July 2012 10:54 [Source: ICIS news]
By Ong Sheau Ling
SINGAPORE (ICIS)--Spot naphtha prices in Asia hit a 10-week high on Friday, buoyed by strong physical demand for August as regional crackers ramp up production, while refinery outages in Japan and lower export volumes from India shaved supply, market sources said.
At the close of trade on Friday, the first-half September naphtha contracts were assessed at $890.00-892.00/tonne (€720.90-722.52/tonne) CFR (cost and freight)
Market sentiment is bullish as premiums on tenders and spot August purchases are on the rise, traders said.
South Korean cracker operators settled purchases at premiums that were 33-56% higher than last week. Indian refiners also closed tenders at higher premiums, up 7-18% from the previous week.
“Higher run rates at crackers will mean more demand for naphtha,” a Singapore-based trader said.
Among the Asian cracker operators that bought spot cargoes this week are
Apart from the regular buyers, industry players are expecting
The affected Mizhushima-based refinery supplies naphtha feed to Mitsubishi Chemicals and Asahi Kasei’s crackers, while the affected Chiba-based refinery supplies feed to its downstream cracker, to Mitsui Chemical’s cracker to a small extent, traders said.
Major regional naphtha exporter Reliance Industries Ltd (RIL), meanwhile, has been staying off the market this month, further reducing availability of the spot material in
“Healthy [naphtha crack] spread and good product margins are supporting the [naphtha] prices,” a South Korean trader said.
For the week ended 13 July, ethylene margins were assessed at $226/tonne, up $15/tonne from the previous week, according to ICIS.
On 20 July, the naphtha crack spread versus September Brent crude futures was at $88.73/bbl, up slightly from the previous day. The spread was off the two-month high of $93.43/bbl that was hit on 18 July, but a marked improvement from a three-and-a-half year-low of $3.27/bbl recorded on 13 July, according to ICIS.
The widening of the inter-month spread is also a consequence of the bullish market.
The inter-month spread between the first-half September and first-half October contracts was assessed at $11.50/tonne – a 10-week high – on Friday, according to ICIS.
The speed and magnitude of the price increase, however, are worrying regional cracker operators and traders.
Prices have jumped $193.50/tonne or 28% from its 20-month low at $697.50/tonne CFR Japan on 22 June to reach $891.00/tonne CFR Japan on 20 July, ICIS data showed.
Lacklustre petrochemical demand from Asia’s largest consumer –
On 20 July close, spot ethylene prices were $10/tonne higher week on week at $1,070-1,090/tonne
The third quarter is expected to see limited spot purchases from
FPCC has enough naphtha stocks to last up to September, and is heard to be buying liquefied petroleum gas (LPG), an alternative and cheaper feedstock to naphtha.
Another Taiwanese producer CPC is likely to skip spot purchases in the current quarter as it has ample supply, market sources said.
Come August, the region’s naphtha supply will be boosted by about 590,000 tonnes of European arbitrage cargoes that will likely weigh on prices.
“The price upside may only last for a few more weeks, on the condition that energy values stay strong,” another
http://www.icis.com/Articles/2012/07/20/9579744/asia-naphtha-hits-10-week-high-on-strong-physical-demand.html
Created on 2009-08-13.23:45:36 by danny, last changed 2009-08-25.08:02:01 by cgroves.
In Jython, it doesn't seem be possible to flush the buffer of a
'write'-mode pipe on request the way Python does (pipe.flush() has no
effect).
At bottom is a simple program to create a pipe, write to it, read from
it, and close. The behavior in Python is as expected:
$ python post.py
Read this line from child: "Read this line from parent: 'Test line'"
Waiting 10 seconds
Exiting
In Jython, though, the program hangs until quit forcibly:
$ jython post.py
Traceback (most recent call last):
File "/usr/local/sem/CampaignManager/src/test/Pipes/post.py", line 6, in ?
line = sys.stdin.readline()[:-1]
KeyboardInterrupt
If I close the pipe, instead of just flushing, though, the behavior is
correct, which leads me to think that 'flush()' is being ignored. This
is important for being able to use pipes.
Cheers,
Danny
import os
import sys
import time
if len(sys.argv) > 1: # Child
line = sys.stdin.readline()[:-1]
sys.stdout.write("Read this line from parent: '%s'" %line)
sys.stdout.flush()
else:
pin,pout = os.popen2('/usr/bin/jython /usr/local/sem/CampaignManager/src/test/Pipes/post.py --child')
pin.write("Test line\n")
pin.flush()
#pin.close()
print 'Read this line from child: "%s"' %pout.readline()
print "Waiting 10 seconds"
time.sleep(10)
print "Exiting"
What operating system are you on? On my local OSX I seem to get the
same behavior from python and jython.
oops, scratch that, I failed to port the script over to my system well
enough. I was able to reproduce the hang.
Thanks Frank. FYI here's my os info in case it's helpful:
Linux yoda 2.6.18-128.el5 #1 SMP Wed Jan 21 10:41:14 EST 2009 x86_64 x86_64
x86_64 GNU/Linux
On Mon, Aug 17, 2009 at 2:38 PM, Frank Wierzbicki <report@bugs.jython.org>wrote:
>
> Frank Wierzbicki <fwierzbicki@users.sourceforge.net> added the comment:
>
> oops, scratch that, I failed to port the script over to my system well
> enough. I was able to reproduce the hang.
>
> _______________________________________
> Jython tracker <report@bugs.jython.org>
> <>
> _______________________________________
>
Should be fixed as of r6718. Thanks for the report!
http://bugs.jython.org/issue1433
I've been trying to understand why I'm getting wrong output with this function. It must be somewhere where I count the chars in the array.
Here it goes.
So, when I input a random choice of chars, e.g.:
Code:
int clean_array(char array[], char clean[], int size)
{
int i, j = 0;
/* j is to count index of the array cleaned out of */
/*blanks and punct marks. i is for the original array */
for(i = 0; i < size; i++)
{
if( isalpha(array[i]) )
{ /* if char is a letter, put it in the */
clean[j] = array[i]; /* other array. Increment its index */
j++;
}
}
clean[j] = '\0'; /* as index is incremented already, just */
/* close the string */
return (j-1); /* return the number of chars in string */
}
jhy ut!
and store it in the original array, the program cleans it up correctly and my result is (using puts):
jhyutP
The last letter I believe is something random, maybe from outside of the clean string.
What did I do wrong with the subscripting?
http://cboard.cprogramming.com/c-programming/62025-clean-array-blanks-array-subscripting-printable-thread.html
Automated Setup for pgloader
Another.
Migrating the schema
For the schema parts, I’ve been using mysql2pgsql with success for many years. This tool is not complete and will do only about 80% of the work. As I think that the schema should always be validated manually when doing a migration anyway, I happen to think that it’s good news.
Getting the data out
Then for the data parts I keep on using pgloader. The data is never quite right, and the ability to filter out what you can't readily import into a reject file proves itself a must-have here. The problems you have in the exported MySQL data are quite serious:
First, date formatting is not compatible with what PostgreSQL expects,
sometimes using
20130117143218 instead of what we expect:
2013-01-17 14:32:18, and of course even when the format is right (that seems to depend
on the MySQL server’s version), you still have to transform the
0000-00-00 00:00:00 into
NULL.
Before thinking about the usage of that particular date rather than using
Then, text encoding is often mixed up, even when the MySQL databases are said to be in latin1 or unicode, you somehow always end up finding texts in win1252 or some other code page in there.
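One common coping strategy (my own sketch, not something pgloader ships) is to try a list of encodings in order, with latin-1 as the never-failing last resort:

```python
def decode_fallback(raw, encodings=('utf-8', 'cp1252', 'latin-1')):
    """Decode `raw` bytes using the first encoding that works.

    latin-1 maps every byte to a code point, so with it in the list
    this function never raises; it may still decode mojibake, which
    is exactly the kind of row a reject file should catch.
    """
    for enc in encodings:
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            continue
    return raw.decode('latin-1', 'replace')  # defensive default
```

Putting cp1252 before latin-1 matters: both accept almost any byte, but cp1252 maps the 0x80-0x9F range to the printable characters that typically appear in data mislabeled as latin1.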
And of course, MySQL provides no tool to export the data to
CSV, so you have
to come up with your own. The
SELECT INTO OUTFILE command on the server
produces non conforming CSV (
\n can appear in non-escaped field contents),
and while the
mysql client manual page details that it outputs
CSV when
stdout is not a terminal, it won’t even try to quote fields or escape
\t
when they appear in the data.
So, we use the mysqltocsv little script to export the data, and then use that data to feed pgloader.
Now, we have to write down a configuration file for pgloader to know what to load and where to find the data. What about generating the file from the database schema instead, using the query in generate-pgloader-config.sql:
with reformat as (
  select relname, attnum, attname, typname,
         case typname
           when 'timestamptz' then attname || ':mynull:timestamp'
           when 'date'        then attname || ':mynull:date'
         end as reformat
    from pg_class c
         join pg_namespace n on n.oid = c.relnamespace
         left join pg_attribute a on c.oid = a.attrelid
         join pg_type t on t.oid = a.atttypid
   where c.relkind = 'r'
     and attnum > 0
     and n.nspname = 'public'
),
config_reformat as (
  select relname,
         '['||relname||']' || E'\n'
         || 'table = ' || relname || E' \n'
         || 'filename = /path/to/csv/' || relname || E'.csv\n'
         || 'format = csv' || E'\n'
         || 'field_sep = \t' || E'\n'
         || 'columns = *' || E' \n'
         || 'reformat = ' || array_to_string(array_agg(reformat), ', ') || E'\n'
         as config
    from reformat
   where reformat is not null
group by relname
),
noreformat as (
  select relname, bool_and(reformat is null) as noreformating
    from reformat
group by relname
),
config_noreformat as (
  select relname,
         '['||relname||']' || E'\n'
         || 'table = ' || relname || E' \n'
         || 'filename = /path/to/csv/' || relname || E'.csv\n'
         || 'format = csv' || E'\n'
         || 'field_sep = \t' || E'\n'
         || 'columns = *' || E' \n'
         || E'\n'
         as config
    from reformat
         join noreformat using (relname)
   where noreformating
group by relname
),
allconfs as (
  select relname, config from config_reformat
  union all
  select relname, config from config_noreformat
)
select config
  from allconfs
 where relname not in ('tables', 'wedont', 'wantto', 'load')
order by relname;
To work with the setup generated, you will have to prepend a global section for pgloader and to include a reformating module in python, that I named mynull.py:
# Author: Dimitri Fontaine <[email protected]>
#
# pgloader mysql reformating module

def timestamp(reject, input):
    """ Reformat str as a PostgreSQL timestamp

    MySQL timestamps are ok this time: 2012-12-18 23:38:12
    But may contain the infamous all-zero date, where we want NULL.
    """
    if input == '0000-00-00 00:00:00':
        return None
    return input

def date(reject, input):
    """ date columns can also have '0000-00-00' """
    if input == '0000-00-00':
        return None
    return input
Now you can launch
pgloader and profit!
Conclusion
There are plenty of tools to assist you migrating away from MySQL and other databases. When you make that decision, you’re not alone, and it’s easy enough to find people to come and help you.
While MySQL is Open Source and is not a
lock in from a licencing
perspective, I still find it hard to swallow that there’s no provided tools
for getting data out in a sane format, and that so many little
inconsistencies exist in the product with respect to data handling (try to
have a
NOT NULL column, then enjoy the default empty strings that have been
put in there). So at this point, yes, I consider that moving to
PostgreSQL
is a way to
free your data:
https://tapoueh.org/blog/2013/01/automated-setup-for-pgloader/
>
IndexOutOfRangeException: Array index is out of range. My problem is with line 37 of this script (GameObject Newaster = Instantiate (aster[spawnObject], spawnPoints [spawnIndex].position, spawnPoints [spawnIndex].rotation) as GameObject;):
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;
using System.Collections.Generic;
public class Asteroid : NetworkBehaviour {
public float timeTospawn = 0.5f;
public Transform[] spawnPoints;
public GameObject[] aster;
public List<Transform> possibleSpawns = new List<Transform>();
// Use this for initialization
void Start()
{
//fill possible spawn
for (int i = 0;i<spawnPoints.Length;i++)
{
possibleSpawns.Add(spawnPoints[i]);
}
InvokeRepeating("SpawnItems", timeTospawn, timeTospawn);
}
// Update is called once per frame
void Update () {
}
void SpawnItems()
{
int spawnIndex = Random.Range(0, possibleSpawns.Count);
int spawnObject = Random.Range(0, aster.Length);
GameObject Newaster = Instantiate (aster[spawnObject], spawnPoints [spawnIndex].position, spawnPoints [spawnIndex].rotation) as GameObject;
Newaster.GetComponent<DestroyAsteroid>().mySpawnPoint = possibleSpawns[spawnIndex];
possibleSpawns.RemoveAt(spawnIndex);
}
}
So the issue is that the array element you're trying to access doesn't exist.
One possible issue: does it happen every time it's called, or does it work sometimes? Random.Range is inclusive, which means it can return the highest possible value. Say your array has 10 elements, so aster.Length = 10; that means spawnObject has a chance to equal 10, but the 10th element in the array is element 9, because it counts from 0, not 1. I'd try:
int spawnObject = Random.Range(0, aster.Length - 1);
The list may have a similar issue.
Alternatively:
You're using possibleSpawns.Count to set spawnIndex, and then using spawnIndex to access an element from spawnPoints.
Is spawnPoints definitely longer than possibleSpawns?
I'd be Debug.Log-ing all the array lengths and index values and making sure they are within each other's ranges. Sorry I can't help more, but it kinda depends on what's in your public arrays.
Thanks for the reply, but it doesn't work; now it's this line that fails with the same error: GameObject Newaster = Instantiate (aster[spawnObject], spawnPoints [spawnIndex].position, spawnPoints [spawnIndex].rotation) as GameObject;
Can you send me all the lines you changed?
I'm not a native English speaker and I didn't understand everything you said.
There is nothing necessarily wrong with that line; the issue may be the length of your public arrays. I suggest Debug.Log-ing the length of those arrays and the indexes before the line it errors on, to make sure they are correct, keeping in mind that the 10th element in an array is element 9 (it will always be one less).
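For what it's worth, the likely root cause in the posted script is the index mismatch flagged above: spawnIndex is drawn from the shrinking possibleSpawns list but then used to index the fixed spawnPoints array, and once possibleSpawns is empty, Random.Range(0, 0) still yields 0, which is out of range for an empty list. The fix (draw the index and the element from the same list, and guard against emptiness) is language independent; here is a minimal Python sketch of the corrected logic (the names mirror the C# script, but the helper itself is hypothetical):

```python
import random

# Mirror of the spawn setup: a fixed set of points, and a shrinking
# list of points that have not been used yet.
spawn_points = ['p0', 'p1', 'p2', 'p3']
possible_spawns = list(spawn_points)

def spawn_one():
    """Pick, remove and return a random unused spawn point."""
    if not possible_spawns:            # guard: nothing left to spawn at
        return None
    i = random.randrange(len(possible_spawns))
    return possible_spawns.pop(i)      # index and element from the SAME list

# Each call consumes one point; a fifth call would safely return None.
drawn = [spawn_one() for _ in range(len(spawn_points))]
```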
Answer by AgarFun · Jul 03, 2017 at 05:56 AM
Hi everyone, I've been taking a class on C programming and have been successful up until classes. I am now learning about stacks, and was told to apply one to the program below so that the output
1
2
3
4
will print out
4
3
2
1
I understand the concept, in which the output will first be stored in an array [1,2,3,4], but am not entirely sure how to retrieve it without making the program really long. I know about pop functions, but have only used them with push, empty and full functions in class. I am just having problems figuring out how to implement one here. Will I have to use a pop function to remove the numbers from top to bottom and then print them out that way? Can I complete this by just adding the pop function? Please let me know if you have any suggestions.
Any help will be greatly appreciated
Thanks
Code:
#include <iostream>
#include <cmath>
#include <string>

using namespace std;

const string space = "";

void primefactor(int number);
bool trueprime(int number);

int main()
{
    int n;
    cout << "Enter a number > 1000 -> ";
    cin >> n;
    primefactor(n);
    return 0;
}

void primefactor(int n)
{
    bool truePrime = trueprime(n);
    int prime = n;
    int i = 2, j;
    double squareRoot = sqrt(static_cast<double>(n));
    int count = 0;

    cout << "The prime factorization of " << n << " is:" << endl;
    if (truePrime)
        cout << space << n << " is a prime number." << endl;
    else
    {
        while ((prime > 0) && (i <= n))
        {
            if ((prime % i) == 0)
            {
                count++;
                for (j = 0; j < count; j++)
                    cout << space;
                cout << i << " is a factor" << endl;
                prime /= i;
            }
            else
                i++;
        }
    }
}

bool trueprime(int n)
{
    int i;
    for (i = 2; i < n; i++)
    {
        if ((n % i) == 0)
            return false;
    }
    return true;
}
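The pattern being asked about, push everything then pop until empty, is only a few lines regardless of language. Here it is sketched in Python; the same idea maps directly onto C++'s std::stack with push(), top(), pop() and empty():

```python
stack = []

# push phase: store the output in order
for n in [1, 2, 3, 4]:
    stack.append(n)                      # push

# pop phase: a stack is last-in, first-out, so popping reverses the order
reversed_out = []
while stack:                             # loop until the stack is empty
    reversed_out.append(stack.pop())     # pop

print(reversed_out)  # [4, 3, 2, 1]
```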
I was wondering if somebody can help me out with this. I am not a scripter and I'm kind of stuck. I wanted to make a script with two variables (one for a 3D object and one for a texture) that I can assign to an NGUI button; when pressed, it swaps the current texture for the one in the variable. This is the script I found online and changed a bit. Only, like I said, I am bad at scripting and can't get it to work.
using UnityEngine;
using System.Collections;

// (renamed from "swap texture": C# class names can't contain a space)
public class SwapTexture : MonoBehaviour
{
    // Cache our renderer
    private Renderer rend;

    // Store the textures in an array; then you can quickly add more or remove some
    public Texture2D[] textures;
    public GameObject[] cube;

    void Awake()
    {
        // Get the renderer
        rend = gameObject.GetComponent<Renderer>();

        // If none, then add one
        if (rend == null)
            rend = gameObject.AddComponent<MeshRenderer>();
    }

    void OnClick()
    {
        ChangeTexture(0);
    }

    // Update function (the original keyboard-driven version, kept commented out)
    //void Update () {
    //    if (Input.GetButton("w"))
    //        ChangeTexture(0);
    //    else if (Input.GetButton("a"))
    //        ChangeTexture(1);
    //    else if (Input.GetButton("s"))
    //        ChangeTexture(2);
    //    else if (Input.GetButton("d"))
    //        ChangeTexture(3);
    //}

    // Change texture
    void ChangeTexture(int inx)
    {
        // Just make a small check, so we don't try to take a non-existing texture
        if (inx >= textures.Length)
            return;

        // Change texture
        rend.material.mainTexture = textures[inx];
    }
}

As you can see, originally it used the keyboard to swap the textures, and it worked fine if I assigned it to the 3D object. But this is not an option for me, because I have multiple 3D objects that need to swap a texture and I can't use a keyboard for it.
Hope that someone can help me with this! Thanks in advance.
Part 1: Boost python with c/c++ bindings. A beginner-friendly guide to embedding c/c++ shared libraries in Python.
Python is a very easy but versatile programming language used almost everywhere. Being an interpreted language, it naturally lags behind in terms of speed of execution.
In this tutorial you will learn how to make your Python program faster using c/c++ code. Let's start.
Note: here are the requirements you need to meet:
- Visual Studio 2019 (don't confuse it with Visual Studio Code)
- an installed Python
- PyCharm Community Edition
- OS: Windows 10
There are 3 things we need to do:
- create a c/c++ shared library in Visual Studio
- create python module to consume the c/c++ code (shared library)
- run and check how program works
1: create c/c++ shared library in Visual Studio
- Add new DLL (Dynamic-Link Library) project in Visual Studio
Note: on Windows a shared library has the extension .dll, whereas on Linux it's .so.
Now create two files with implementation given below:
- test.h (header file)
- test.cpp (cpp file)
Note: This is just an example of sum function implemented in c++ and used in python. In later tutorials we’re going to have more advanced use cases.
// test.h
#pragma once

#include <string>

int sum(int, int);
Let’s analyse what’s in the test.h:
- pragma once makes sure that the file is loaded only once at a precompiling step. Simply, c/c++ merge all source files into one file before compilation to binary code, and it makes sure there is only one copy of this file.
- #include <string> this is a header from the source code. Simply, it allows using strings and its corresponding functions.
- int sum(int, int) this is a very simple function declaration that returns integer value as a sum of two other integers.
// test.cpp
#include "pch.h"
#include <string>

#define DLLEXPORT extern "C" __declspec(dllexport)

DLLEXPORT int sum(int a, int b) {
    return a + b;
}
Let’s analyse what’s in the test.cpp:
- #include "pch.h" is the auto-generated precompiled header needed for a shared (dynamic) library built with Visual Studio. A full explanation is beyond this tutorial. One thing to remember is to keep #include "pch.h" at the top of the file, because everything above it will not compile (this is just how the compilation works :) )
- #include <string> please look above
- #define DLLEXPORT extern "C" __declspec(dllexport) means that DLLEXPORT == extern "C" __declspec(dllexport), so it may now be used below in the code. extern "C" means (in a nutshell) that our function sum will keep the same, unmangled name for the Python program. __declspec(dllexport) makes sure that the function will be visible to Python.
- int sum(int a, int b){ return a + b; } is just the implementation of our function.
And now let’s compile our simple program. You need to click a green triangle at the middle top in Visual Studio.
We are done with c++!
Note: when you build your project make sure to have the same build configuration as python. For example if you have python 64 bit you need to build c/c++ on 64 bit as well. You can check your python build version on a python console at the bottom of the Pycharm:
2: create python module to consume the c/c++ code (shared library)
- create new python project in the IDE
- create python file example_1.py with below implementation
from ctypes import *

if __name__ == "__main__":
    mydll = cdll.LoadLibrary(r"C:\Users\PC\PycharmProjects\shared_library_example\your_project_name.dll")
    print(mydll.sum(10, 10))
Let’s analyse what’s in the example_1.py:
- from ctypes import * — is a foreign function library for Python. It provides C compatible data types, and allows calling functions in DLLs or shared libraries. It can be used to wrap these libraries in pure Python.
- cdll.LoadLibrary — this function loads our shared library.
- print(mydll.sum(10, 10)) — here we’re printing the sum of 10 and 10 defined in the shared library we’ve just loaded.
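One detail worth adding: ctypes assumes that every argument and return value is a C int unless told otherwise, which happens to be fine for sum. For any other signature you should declare argtypes and restype on the function. Here is a standalone sketch of that pattern using the C math library instead of our custom DLL, so it runs without Visual Studio (it assumes a Unix-like system where libm can be located):

```python
from ctypes import CDLL, c_double
from ctypes.util import find_library

# Load the C math library (the lookup below is a Unix-like assumption;
# on Windows you would load your own DLL as shown in example_1.py).
libm = CDLL(find_library("m") or "libm.so.6")

# Tell ctypes how to marshal arguments and the return value; without
# this, a double return would be silently misread as an int.
libm.sqrt.argtypes = [c_double]
libm.sqrt.restype = c_double

print(libm.sqrt(9.0))  # 3.0
```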
3: run and check how program works
To run python program we have two options:
- from IDE (example shown in pycharm)
- from terminal
- run python program from a pycharm IDE
Near the top right there should be a green triangle. Click it, and you will see the script output at the bottom of the PyCharm IDE.
- from a terminal
Open a terminal (search for PowerShell in Windows).
Then type cd location/of/your/python/program like:
Then type python ./example_1.py
In the next tutorial I will show a reverse — how to run python from c/c++ program.
Wish you best!
While the syntax described for filter in the previous post allows you to do some nifty things, there are still a few more things that an ADO.NET Data Service supports.
The first are operators for arithmetic: +, -, *, /, %. These also have mnemonics: add, sub, mul, div and mod.
So the following is a very elaborate but silly filter:
/Region?$filter=(1 add 1 eq 2) and (1 sub 1 eq 0) and (1 mul 1 eq 1) and (1 div 1 eq 1) and (1 mod 1 eq 0)
There are also a number of built-in functions that you can use. These are fairly simple and are likely to be implemented in some way by the data provider for your data service. These functions must be recognized by the provider for the data source to be usable; the provider for Linq to Objects and Linq to Entities already recognize them and in the latter case, the Entity Framework is able to have them execute directly on the database server, for best efficiency.
The functions are the following. I'll give an obvious example of each that always evaluates to true, just to show how they can be used. Remember you can always use properties or more complex expressions in place of constants.
String Functions
contains - checks whether a string is contained in another one
/Region?$filter=contains('abc', 'ab')
endswith - checks whether a string ends with another one
/Region?$filter=endswith('abc', 'bc')
startswith - checks whether a string starts with another one
/Region?$filter=startswith('abc', 'ab')
indexof - gets the index of a substring (zero-based)
/Region?$filter=2 eq indexof('abc', 'c')
insert - inserts a string inside another one
/Region?$filter='abcd' eq insert('ad', 1, 'bc')
tolower - converts a string to lowercase
/Region?$filter='abc' eq tolower('ABC')
toupper - converts a string to uppercase
/Region?$filter='ABC' eq toupper('abc')
trim - trims leading and trailing whitespace from a string
/Region?$filter='abc' eq trim(' abc ')
remove - removes characters from a string from a given offset (length optional)
/Region?$filter='abc' eq remove('a00bc', 1, 2)
/Region?$filter='a' eq remove('a00bc', 1)
substring - gets characters from a string from a given offset (length optional)
/Region?$filter='00' eq substring('a00bc', 1, 2)
/Region?$filter='00bc' eq substring('a00bc', 1)
concat - concatenates two strings
/Region?$filter='abc' eq concat('ab', 'c')
length - gets the length of a string
/Region?$filter=3 eq length('abc')
DateTime Functions
year - gets the year of a DateTime value
/Region?$filter=1990 eq year('1990-12-20')
month - gets the month of a DateTime value
/Region?$filter=12 eq month('1990-12-20')
day - gets the day of the month of a DateTime value
/Region?$filter=20 eq day('1990-12-20')
hour - gets the hour of the day of a DateTime value
/Region?$filter=14 eq hour('1990-12-20T14:10:20')
minute - gets the minute of the hour of a DateTime value
/Region?$filter=10 eq minute('1990-12-20T14:10:20')
second - gets the second of the minute of a DateTime value
/Region?$filter=20 eq second('1990-12-20T14:10:20')
Math Functions
round - rounds a value to its nearest integral value
/Region?$filter=2 eq round(1.8)
floor - gets the largest integral value that is less than or equal to a given value
/Region?$filter=1 eq floor(1.8)
/Region?$filter=-2 eq floor(-1.8)
ceiling - gets the smallest integral value that is greater than or equal to a given value
/Region?$filter=2 eq ceiling(1.8)
/Region?$filter=-1 eq ceiling(-1.8)
Type Functions
isof - checks whether something is of the given type - useful in inheritance scenarios
/Region?$filter=isof('Model.Region') (in the ATOM payload, I see 'Model.Region' as the adsm:type; this depends on the namespace for your types)
cast - casts something to a given type - useful in inheritance scenarios
/Region?$filter=(cast('Model.Region'))/RegionID ge 0
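These functions and operators compose freely with one another and with the comparison operators described in the previous post. For instance, this (still always-true, constants-only) filter mixes string, math and arithmetic pieces in a single expression; as everywhere above, real queries would use properties in place of the constants:

```
/Region?$filter=startswith(toupper('western'), concat('WE', 'ST')) and (floor(1.8) add 1 eq 2)
```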
Whew! Well, that's it for functions. Put them to good use!
This post is part of the transparent design exercise in the Astoria Team. To understand how it works and how your feedback will be used please look at this post.
XElement Refresher
In this day and age of JSON data everywhere, a lot of developers are leaving XML behind. There are, however, still some areas where XML is used quite heavily; web and application configuration files are a start. There are also many financial institutions that still do pretty much everything internal to a core banking system in XML.
.NET has always had great support for handling XML data, and there are certainly more than a handful of different APIs for dealing with it. In this article, I'm going to go over a quick refresher of using the XElement interface, and in the process introduce you to one place where XML is still used that's potentially quite useful to you.
Constructing XML
The XElement interface makes it almost childlike to construct valid XML. Let's imagine for a moment we wanted to create something that looks similar to the following XML data:
<person> <name>Peter Shaw</name> <email>shawty.d.ds@googlemail.com</email> </person>
Using XElement to construct this is as simple as using the following code:

using System;
using System.Xml.Linq;

// (class and namespace scaffolding reconstructed; only the XElement calls matter)
namespace XElementRefresher
{
    class Program
    {
        static void Main(string[] args)
        {
            XElement myRoot = new XElement("person");
            myRoot.Add(new XElement("name", "Peter Shaw"));
            myRoot.Add(new XElement("email", "shawty.d.ds@googlemail.com"));

            Console.Write(myRoot.ToString());
        }
    }
}
If we press F5 to run this, we get the following in our console window:
Figure 1: The results of running the preceding code
Very easy, and very simple to understand.
Adding nested elements is not any harder either.

using System;
using System.Xml.Linq;

namespace XElementRefresher
{
    class Program
    {
        static void Main(string[] args)
        {
            XElement myRoot = new XElement("person");
            myRoot.Add(new XElement("name", "Peter Shaw"));
            myRoot.Add(new XElement("email", "shawty.d.ds@googlemail.com"));

            XElement nestedElement = new XElement("location");
            nestedElement.Add(new XElement("area", "North Durham"));
            nestedElement.Add(new XElement("country", "United Kingdom"));
            myRoot.Add(nestedElement);

            Console.Write(myRoot.ToString());
        }
    }
}
Which, when run, gives us the following:
Figure 2: After running the code containing nested elements
By combining multiple levels of elements, it's possible to very simply build an XML structure to match anything you need; then, by using the 'ToString' method on the root element, get an appropriately formatted string ready to save into a file. XElement also makes it easy to parse XML. To demonstrate that, however, we need a source of XML data.
Many of you who have a Gmail account may not know this, but it's possible to get an XML summary of your Gmail inbox, and very easily too. If you point your browser at the Gmail inbox feed URL and enter your Gmail user name and password when prompted, you can take a look at the output in your browser.
Figure 3: Viewing the preceding output in a browser
You can easily retrieve this feed in a .NET program by using the WebClient class and an old-fashioned base64-encoded access request. Needless to say, you must do this over HTTPS. In fact, Google doesn't give you any choice; it's HTTPS or bust. You can grab the XML data by using the following code:
string gmailFeed; using (WebClient client = new WebClient()) { client.Credentials = new NetworkCredential("user", "password"); gmailFeed = client.DownloadString (""); }
If you still have problems accessing the feed when using this procedure, you may also need to configure your Gmail account settings to allow 'Less secure apps' to access the account. Generally, if this is the case, you'll get an automatic email to your alternative email address warning you of the fact.
Figure 4: Google-generated warning
The second link in the EMail will take you to the following page:
Figure 5: The Google page where you can allow access
where you can allow your application to have access. Once you have the received XML in a regular string, you then can parse it using XElement as follows:
XElement gmailDoc = XElement.Parse(gmailFeed);
Because the XML is an 'Atom' feed, you'll also need to create a namespace:
XNamespace ns = "";
Then, with all that done, it's simply just a matter of extracting the information you need. The following code shows the inbox title and message count, then outputs a summary of all the waiting emails available.
Console.WriteLine(gmailDoc.Element(ns + "title").Value); Console.WriteLine("{0} {1}", gmailDoc.Element(ns + "fullcount").Value, gmailDoc.Element(ns + "tagline").Value); Console.WriteLine(); foreach (XElement entry in gmailDoc.Elements(ns + "entry")) { Console.WriteLine(entry.Element(ns + "title").Value); Console.WriteLine(entry.Element(ns + "summary").Value); Console.WriteLine(entry.Element(ns + "author").Element(ns + "email").Value); Console.WriteLine(); }
Grabbing a single element is as easy as using the 'Element' method with the name (optionally prefixed with a namespace) of the element you wish to retrieve. To get an IEnumerable collection, use the 'Elements' method instead. There are others, too, but I leave exploring them to you, the reader.
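As an aside for readers outside .NET, the same one-versus-many lookup split exists in most XML APIs. Python's standard xml.etree.ElementTree, for example, uses find for the first matching child and findall for the collection, which map loosely onto XElement's Element and Elements. A quick sketch against the sample document from the start of this article:

```python
import xml.etree.ElementTree as ET

xml_data = """
<person>
  <name>Peter Shaw</name>
  <email>shawty.d.ds@googlemail.com</email>
</person>
"""

root = ET.fromstring(xml_data)

# find() returns the first matching child (compare XElement.Element)
name = root.find("name").text

# findall() returns every matching child (compare XElement.Elements)
children = [child.tag for child in root.findall("./*")]

print(name)      # Peter Shaw
print(children)  # ['name', 'email']
```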
Got something in .NET you'd like to know more about, or a trick you'd like to share? Feel free to drop a comment in the comments section below, or bend my ear on Twitter, where I can usually be found as @shawty_ds. Let me know if there's anything you'd like to see covered in this column.
Hello! I need help; I have been stuck for about a week with this. I used multiple plugins in a project, and those plugins each use the AndroidManifest.xml, so when I import a plugin it overwrites the old AndroidManifest. I have picked all of them out individually and I'm looking for a way to merge them all.
The plugins im using are:
Google Play OBB Downloader
Everyplay
Mobile Social Plugin
GameAnalytics Unity Package
Qualcomm Vuforia
Yeah, I know that's a lot of plugins. I have tried merging the files manually, but I don't really know how it works, so the new merged AndroidManifest didn't work. I have been investigating, and I think it has to do with the <intent-filter>: from what I have read, there should be only one <activity> with the <intent-filter> in the entire <application>.
I'm not even sure that merging those files the way I did is the right way to do it; I need someone to point me in the right direction.
I'm uploading here all the manifests and the one I created by merging them. If you need more information or another file, just tell me and I will upload it or give the information needed. Thank you! :)
Manifests.zip
Ever figure this out? I'm trying to merge Unibill and Everyplay plugins but I haven't a clue where to start.
I'm trying to merge Vuforia and facebook
I managed to connect Facebook and Vuforia i am going to post my answer later today! :)
Answer by Lucas_Gaspar · Aug 08, 2014 at 09:00 PM
Hello, I managed to merge two plugins that both used the action "android.intent.action.MAIN" and the category "android.intent.category.LAUNCHER" (that is the main conflict: you can have any number of activities, but only one with those two), so here is what I did.
The plugins i merged where Mobile Social Plugin and the Vuforia plugin.
I mailed the developer of the Mobile Social Plugin (which I recommend) asking for advice. He told me that I should recompile the .jar file, but include all the other plugins there. In the same package as the plugin there is a .zip called "ANEclipseProject.zip" where the entire Eclipse project is located.
I needed Eclipse, so I downloaded Eclipse for Android from here: AndroidSDK+Eclipse. I created a new Eclipse project and added the files I got from the zip; then I added the .jar files of the Vuforia plugin ("QCARUnityPlayer.jar" and "Vuforia.jar", to be exact).
Now you have to find what the main script is called. To do this, I looked in the manifest to see what the activity that wanted the actions MAIN and LAUNCHER was called; in the Mobile Social Plugin it's called "AndroidNativeBridge". Find that file in the project and open it.
Now you have to find the name of the other plugin's activity that wants the actions MAIN and LAUNCHER; for example, for Vuforia it's named "QCARPlayerNativeActivity". What you should do now is make the file you opened before (AndroidNativeBridge) extend the class "QCARPlayerNativeActivity" (before this, the file should be extending "UnityPlayerActivity"). Don't forget to add "import com.qualcomm.QCARUnityPlayer.QCARPlayerNativeActivity;" to the import list so it is resolved correctly (the complete name can be seen in the manifest).
Then Eclipse automatically recompiles itself. You can find the .jar in the "bin" folder inside the Eclipse project with the name "androidnative.jar", and this file is what you need to overwrite in your project: find the file inside Unity and replace it with your new .jar.
Now all you need to do is adjust the manifest to include all the activities and permissions of both plugins, and delete the lines where the secondary plugin's activities try to use the actions MAIN and LAUNCHER.
So basically what you need is the Eclipse project of one of the plugins; include the .jar files of the other plugins, extend the main activity to them, and adjust the manifest to include all the activities.
Here are the links that were given to me, where I got the information about merging the two plugins.
I hope this helps; you can ask me if you have any questions. I'm not an expert, but I will try to help. :)
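To make the manifest part of this answer concrete, here is a sketch of what the merged <application> section can look like. The class names follow the two plugins discussed above, but treat the fragment as illustrative, not as a drop-in file; the one rule is that exactly one activity may keep the MAIN action and LAUNCHER category:

```xml
<!-- Illustrative fragment: exactly ONE activity keeps the
     MAIN/LAUNCHER intent-filter; secondary plugin activities
     are declared without it. -->
<application android:icon="@drawable/app_icon" android:label="@string/app_name">
  <activity android:name="com.androidnative.AndroidNativeBridge">
    <intent-filter>
      <action android:name="android.intent.action.MAIN" />
      <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
  </activity>
  <!-- no MAIN/LAUNCHER filter on the secondary activity -->
  <activity android:name="com.qualcomm.QCARUnityPlayer.QCARPlayerNativeActivity" />
</application>
```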
I used AndroidNativePlugin for Unity too. Your solution is very nice, but I still have some questions.
You mean copy the second plugin's .jar into the AndroidNative project and change "extends Activity" to "extends AndroidNativeBridge", right?
If I built a plugin in Eclipse that has 3-4 classes in one package, do I just copy the whole package into the AndroidNative project and fix the Activity class like this: public class myPlugin extends Activity becomes public class myPlugin extends AndroidNativeBridge? Is that correct? Because I'm not familiar with Eclipse, I can't be sure what I did is correct.
By the way, how can I import another .jar into the Eclipse project? I can't find the .jar in the folder when I import, and I'm sure the .jar is in the folder. What is the problem? Thank you :)
To add a .jar to Eclipse you have to right-click on the main project, then "Build Path"/"Configure Build Path", and a window should appear. Go to the "Libraries" tab, then click the "Add External JARs" button. This will make the other .jar part of your project so you can reference it in your code. I hope it helps :)
Hi, thank you for your solution. I extended Android Native Plugin to my other plugin and I have a new androidnative.jar, so I replaced it in my Unity project. Does my other plugin's .jar also have to go into the Unity project?
Hello, I've followed each and every step. Currently using the "AndroidNativeBridge extends QCARPlayerNativeActivity" pattern, the .jar file seems to recompile correctly, but I cannot get both plugins to work together: the latest build throws an
AndroidJavaException: java.lang.NoSuchMethodError: no method with name='getClass' signature='()Ljava/lang/Object;' in class Lcom/androidnative/AndroidNativeBridge;
I understand getClass is a top-level Java method and cannot be overridden, and I'm no Android dev... If anyone got this working, I would be very grateful for pointers, or even an example AndroidManifest file :) Thanks
Answer by liortal · Aug 06, 2014 at 05:17 PM
To answer this, I can generally say that there may be scenarios where multiple AndroidManifest.xml files will be OK, and there will be times where they won't.
The only rule is that there's no general solution for making it work (if any, at all).
Your claim regarding multiple activities in a single manifest is wrong, as you can see in some of the attached files, where multiple <activity> elements are defined.
Unity claims to perform some merging of provided AndroidManifests (e.g: the main one under Plugins/Android will be merged with others that are placed in subfolders of Plugins/Android) - quoted from here:
The issue is that each plugin defines its own Activity that should be the LAUNCHER activity (the "entry point" to the app), and so multiple plugins will not be able to co-exist.
So, for this scenario, there's no easy solution.
I would try to do the following:
- Double-check whether I really need that many plugins.
- Verify with the plugin developer whether there's a version or option that does not use the manifest and their activity.
- Following #2, check if there's an option to manually initialize the plugin code; if so, create my own custom activity that does the initialization (skipping the need for the plugin to have its own activity).
Answer by unity_s_uRTmg4Kx61_A · Oct 03, 2017 at 10:11 AM
Hello everyone,
This may be coming late, but it can help other people out there solve the issue of using two or more plugins in a Unity Android project, most especially Vuforia and any other plugin. I spent many days and weeks unable to combine the AndroidManifest.xml files of the two plugins. All of the credit goes to this great guy (). He made a very detailed tutorial on how to use multiple plugins in your project. All you have to do is first read carefully, identify the exact files and edit the exact lines. The link to the tutorial is , and you can also ask me any question and I will help you as much as I can.
Thanks!
> rsort.zip > Rsort.cpp
//////////////////////////////////
// Rapid Sort & Function Templates
// By: *LuckY*
// lucky760@yahoo.com
//////////////////////////////////
// perhaps I over-commented

// note: the original #include targets and all '<'/'>' characters were
// eaten by the HTML extraction; the includes, template brackets, and
// the tail of do_r_sort below are reconstructed.
#include <iostream>
#include <cstring> //for the strlen() function
using namespace std;

/* function prototypes */

//function template (discussed later) to display the contents of any
//generic array by passing the address of the first element and the
//number of elements in the array. this is much better than using a
//bunch of 'for' loops with 'cout's
template <class T> void show(const T*, int);

//pass the array and the number of elements to the intermediary
//function. it will format the arguments properly and pass them to
//the actual sort procedure
template <class T> void r_sort(T*, int);

//the actual sort procedure. it takes the array, the index of its
//first element, and the index of its last element
template <class T> void do_r_sort(T*, int, int);

int main()
{
    //I'm hard-coding this array of char here, when in fact you can use
    //any generic array (ie: int,short,long,float,double) because the
    //function template allows you to do so
    char chr[] = "The quick brown fox jumped over the lazy dogs.";

    show(chr, strlen(chr));   //display the original array
    r_sort(chr, strlen(chr)); //do the sorting
    show(chr, strlen(chr));   //display the newly sorted array
    return 0;
}

template <class T> void r_sort(T *list, int count)
{
    //pass the beginning values to the do_r_sort function. the first
    //index is always 0 and the index of the last element is 1 less
    //than the element count, which is why we have 'count-1'.
    //if we didn't use this intermediary, we'd have to make ugly
    //function calls to do_r_sort from the main program. we don't
    //want that.
    do_r_sort(list, 0, count - 1);
}

template <class T> void show(const T *arr, int num)
{
    //from the first to the last element of the given array we do a
    //cout. I commented out the " " because we are dealing with a
    //string, so we don't want spaces between each letter, but if it
    //was a numeric array, we'd want to leave it in.
    for (int i = 0; i < num; i++)
        cout << arr[i] /* << " " */;
    cout << endl;
}

template <class T> void do_r_sort(T *arr, int basel, int baser)
{
    //type T is the type which was passed to the function, so in the
    //case of this example type T is 'char'. if you passed an array of
    //double, T would be 'double' etc...
    T temp;
    int biggest, smallest;
    int left, right;

    if (basel >= baser) //nothing left to sort
        return;

    /* find the biggest element */
    //start 'left' at the 'basel' index and 'right' at the 'baser'
    //index. if the value at index left is greater than or equal to
    //the one at right, we move index right to the left one element,
    //and vice versa. we continue this until left and right meet. in
    //other words, we keep moving the index of the smaller value until
    //both indexes meet; that meeting point is the index of the
    //biggest value and is assigned to variable 'biggest'.
    //(side note: there's no 3rd element in the 'for' loop because
    // we do the update of left and right in the body)
    for (left = basel, right = baser; left < right; )
        arr[left] >= arr[right] ? --right : ++left;
    biggest = left;

    /* place biggest at end of array */
    //here we copy the last value of the array to a temp variable,
    //then we put the biggest value (of the entire array) into the
    //last (or rightmost) element, because we are sorting left to
    //right -- smallest to biggest. then we put the temp value into
    //the slot where the 'biggest' one just was. IOW, we swap the
    //biggest value with the last element's value.
    temp = arr[baser];
    arr[baser] = arr[biggest];
    arr[biggest] = temp;

    /* find the smallest element */
    //same as above, except for the smallest element. (the remainder
    //of this function was lost in extraction; the lines below are a
    //reconstruction that follows the pattern described above.)
    for (left = basel, right = baser - 1; left < right; )
        arr[left] <= arr[right] ? --right : ++left;
    smallest = left;

    /* place smallest at start of array */
    temp = arr[basel];
    arr[basel] = arr[smallest];
    arr[smallest] = temp;

    /* sort the elements between the two we just placed */
    do_r_sort(arr, basel + 1, baser - 1);
}

/* How the function template works:

   template <class 'name of generic type'> [return type] [function name] ([argument list]);

   This is used for the function prototype and the definition. The
   'name of generic type' as specified on the template line MUST be
   used in the argument list at least once. Then when you use
   variables of the passed type in your function, you use the same
   typename that you specified. So, if we said:

       template <class Blah> //etc..

   we would do a:

       Blah x;

   to declare a variable of the same type that was passed. The
   function template is actually just a blueprint for the compiler.
   No code is actually created from the code of the template. Once you
   pass something to the function, let's say a 'double' value, the
   compiler will read the blueprint and create an instantiation, or
   the actual 'double' code that will do the procedure of the template
   as specified. */
|
http://read.pudn.com/downloads/sourcecode/math/2178/QUINCY99.15/mysource/Rsort.cpp__.htm
|
crawl-002
|
refinedweb
| 742
| 59.67
|
BlueBoss - Bluetooth Proximity Detection
- Posted: Jun 26, 2008 at 12:09PM
- 7,971 views
- 12 comments
Want to have the presence of a device execute programs or alert you?
Andy Konkol –
Hardware:
Bluetooth radio (USB dongle)
SMA Female Jack
SMA male to N-male pigtail
2.4 GHz antenna (with N-female connector)
Software:
Robert Misiak's Notify Window
Download:
Bluetooth was designed for devices to communicate wirelessly over short distances. However, with a very simple hardware modification you can extend the range of your Bluetooth radio with standard 2.4ghz antennas used in wireless networking (802.11 a/b/g).
Modifying Bluetooth dongles to accept external antennas is documented all over the Internet. In principle it is very easy: find the antenna lead and solder on a connector/antenna. I purchased a very cheap Bluetooth USB dongle on eBay and opened the casing. After finding the antenna trace on the circuit board I soldered on a SMA Female connector to it. After soldering the antenna jack in place I slipped a 3 inch chunk of heatshrink and heated it to cover the exposed circuit board. Now I had a Bluetooth radio that accepts external antennas. Adding an antenna simply increases the range of your radio, allowing you to “see” devices from a farther distance.
To connect an external antenna to the dongle I needed to use a connector converter called a pigtail. I used a SMA male to N-male pigtail. I connected one end of the pigtail to my dongle and the other to an omni-directional 9dbi panel antenna that had an N-female connector.
To take advantage of my newly modified hardware I needed to download the Coding4Fun Toolkit. Included in the toolkit is an API for Bluetooth devices. This API allows you to do a wide variety of things with your Bluetooth radio but I focused on two methods from the ServiceAndDeviceDiscovery library: DiscoverAllDevices and DiscoverDeviceByName.
DiscoverAllDevices allows you to “scan” the airwaves on the 2.4 GHz band and report back what devices your radio sees.
DiscoverDeviceByName allows you to scan for a particular device with a specified name and report back if it is present or not.
private bool DevicePresent()
{
    BluetoothDeviceServicesManager workerBTMgr = new BluetoothDeviceServicesManager();
    Device workerDevice = workerBTMgr.DiscoverDeviceByName(_watchItem.DeviceName);
    return (workerDevice != null);
}

public void Run(String Operation)
{
    switch (Operation)
    {
        case "SingleDevice":
            if (DevicePresent())
            {
                _parentForm.Invoke(_parentForm.AddToDeviceSeenList, new object[] { _watchItem });
            }
            else
            {
                _parentForm.Invoke(_parentForm.RemoveFromDeviceSeenList, new object[] { _watchItem });
            }
            break;
        case "AllDevices":
            BluetoothDeviceServicesManager workerBTMgr = new BluetoothDeviceServicesManager();
            List<Device> Devices = workerBTMgr.DiscoverAllDevices();
            _parentForm.Invoke(_parentForm.ThreadUpdateDiscoverBox, new object[] { Devices });
            break;
    }
}
Both of these methods require that the device you are scanning for is in discoverable mode (which most manufacturers enable by default). Using the two methods described above I was able to tell if a device is in proximity, ultimately enabling me to create alerts and execute programs based on which device is present.
To perform device discovery without making my UI lag I had to create two worker threads: one to discover all devices and display them under my devices listbox, and another to discover the devices by name specified in the "watchlist." I am not an expert at multi-threaded programs but I managed to implement them without any major headaches.
Since the “Add to watchlist” and “Edit” buttons essentially do the same thing, I decided to overload a windows form. I also wanted to keep track of the parent form and disable it when the WatchItemForm was shown.
The second overload allows me to fill in the form control's text based on the data that is already set for the WatchItem that has been selected (the Edit button).
public WatchItemForm(Form1 f, String DeviceName)
{
    InitializeComponent();
    this._parentForm = f;
    lblDeviceName.Text = DeviceName;
}

public WatchItemForm(Form1 f, WatchItem item)
{
    InitializeComponent();
    this._parentForm = f;
    lblDeviceName.Text = item.DeviceName;
    tbxPicturePath.Text = item.ImagePath;
    tbxProgramPath.Text = item.ProgramPath;
    this._parentForm.Enabled = false; // disable the parent while this form is shown
}
I wanted to have a pop up notify window similar to outlook and was able to find Robert Misiak's NotifyWindow. This is a very simple library which allows you to create pop up notify windows very easily. I edited NotifyWindow to include a “picturepath” variable as well as picturebox on the form. As you can see creating a NotifyWindow is quite easy:
public void NotifyDeviceWindow(WatchItem x)
{
    NotifyWindow nw;
    nw = new NotifyWindow();

    // validate Alert Message
    if (x.AlertMessage == null)
    {
        nw.Text = x.DeviceName;
    }
    else
    {
        nw.Text = x.AlertMessage;
    }

    // validate picture
    if (x.ImagePath != null)
    {
        FileInfo imgfile = new FileInfo(x.ImagePath);
        if (imgfile.Exists)
        {
            nw.PicturePath = x.ImagePath;
        }
        else
        {
            MessageBox.Show("Image does not exist");
        }
    }

    nw.Notify();

    if (x.ProgramPath != null)
    {
        RunProcess(x.ProgramPath);
    }
}
Process Flow/Software

Andy, great article. A couple questions:
- Can it handle detection of multiple blue-tooth devices at a time?
- Does the 2.4GHz antenna screw with your existing 802.11 a/b/g WiFi network?
Hi Andy,
Great work!
In the Chicago Marathon last Sunday, contestants wore a chip on the shoe lace for race information transmittal. Were they using Bluetooth?
Thanks for info.
Regards.
can anyone send me code for bluetooth data acquisition btw two pcs
The device discovery of C4F devkit bluetooth has a virtual memory (VM) leak. Just watch what happens to the VM each time you search for devices.
Nice Work.
If there's one thing I'd suggest it would be to add a way to execute commands when the watched device disappears
@John, did you do a small scale test of this, is it just BlueBoss or the actual kit?
@abcd source code is at the top,
Is there a method to DiscoverDevice(), not Devices() at a very quick rate. For instance, I would like to return simply the address as soon as it appears then call the method again. I would like to scan the cars as they drive by my house.
check out the Coding4Fun Toolkit which Andy used to do this. An API in there may do what you want. I don't know if bluetooth itself will do what you're asking however.
Hello, I'm working on a project that requires me to extend the range of an USB dongle as much as possible. Have you any data on how much does a panel antenna like yours extend the signal?
Thanks!
Seb
@Jirshiri, the code is in c#. We don't have a VB.net port but what you can do is use Telerik code converter to switch the code over.
nice... but do you have the source code in VB ?
because i would like to do it also someting similar...
and no i dont know nothing about c++
|
https://channel9.msdn.com/coding4fun/articles/BlueBoss-Bluetooth-Proximity-Detection
|
CC-MAIN-2015-40
|
refinedweb
| 1,132
| 57.87
|
How do I execute a Java3D applet without installing Java3D
Hi guys!
What I'm trying to do is run an applet on any computer, even if the computer does not have Java3D installed.
I know that this can be done (I have seen examples on the Internet), but I'm not able to do anything. I have seen solutions on the Internet... but I'm completely lost because they do not work for me.
Please, any kind of comment would be very helpful. I'm 100% lost
Thanks in advance...
You should not use or override the loadLibraryInternal method.
> Actually I want to know how I can make my applet working in all machines with and without java3d installed..
You are running into issue 534, "ClassNotFoundException when running applet if Java 3D installed into JRE", which is fixed in Java 3D 1.5.2. Have you tried aces' suggestion of pointing to:...
If so, did this work for you?
-- Kevin
Hi kevin,
I set ... instead of ... as the value of PARAM NAME="jnlpExtension1" in my applet. It's again giving me the same JNLPAppletLauncher ClassNotFoundException on the machine with java3d installed..
Should I change my
Hello again aces!
I think I have the same question as rajisp... How do I launch the applet using JNLPAppletLauncher?!
Isn't it enough to place the applet tag into an HTML page?! Or maybe I have to code a method into the applet in which I have to launch the applet "by myself".
Thanks a lot!
So far as I know, there is no need to do anything special about your applets.
Just take the JNLPApplet tag as a template, add your applet's jar file, set your applet class name on the subapplet.classname param, and your applet should work.
This is what I did. Some applets I didn't even recompile: I just took the old jar and put it to run on JNLP.
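For reference, the forum software stripped the actual applet tags out of the posts in this thread (which is why several [code] blocks appear empty). A JNLPAppletLauncher tag of the shape aces describes looks roughly like this; the jar names and the subapplet class are placeholders, and the jnlpExtension1 URL should be checked against the JNLPAppletLauncher/Java 3D documentation of the time rather than taken verbatim:

```html
<applet code="org.jdesktop.applet.util.JNLPAppletLauncher"
        width="400" height="300"
        archive="applet-launcher.jar,myapplet.jar">
  <param name="codebase_lookup" value="false">
  <param name="subapplet.classname" value="mypackage.MyJava3DApplet">
  <param name="subapplet.displayname" value="My Java 3D Applet">
  <param name="jnlpNumExtensions" value="1">
  <param name="jnlpExtension1"
         value="http://download.java.net/media/java3d/webstart/release/java3d-latest.jnlp">
</applet>
```

With this in an ordinary HTML page, the launcher downloads the Java 3D jars and native libraries via JNLP on machines that don't have Java 3D installed, then starts the class named in subapplet.classname.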
Ok, I know I'm a little bit boring but... at least I have my applet running with the JNLPAppletLauncher...
The problem now is that if I have a computer without Java3D installed, it doesn't do anything.
Have you ever experienced something like that?!
Oooops, forgot to paste the error:
Java Plug-in 1.5.0_06
Using JRE version 1.5.0_06 Java HotSpot(TM) Client VM
User home directory = C:\Documents and Settings\user
----------------------------------------------------
c: clear console window
f: finalize objects on finalization queue
g: garbage collect
h: display this help message
l: dump classloader list
m: print memory usage
o: trigger logging
p: reload proxy configuration
q: hide console
r: reload policy configuration
s: dump system and deployment properties
t: dump thread list
v: dump thread stack
x: clear classloader cache
0-5: set trace level to <n>
----------------------------------------------------
JNLPAppletLauncher: static initializer
os.name = windows xp
nativePrefix =
nativeSuffix = .dll
tmpRootDir = C:\DOCUME~1\user~1\CONFIG~1\Temp\jnlp-applet\jln4742
Applet.init
subapplet.classname = package.subPackageName.Applet
subapplet.displayname = Running Java 3D Applet
Applet.start
Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException
at sun.plugin.util.AnimationPanel.initLogoImage(Unknown Source)
at sun.plugin.util.AnimationPanel.doPaint(Unknown Source)
at sun.plugin.util.AnimationPanel.paint(Unknown Source)
at sun.awt.RepaintArea.paintComponent(Unknown Source)
os.name = windows xp
os.arch = x86
Message was edited by: chamorrus
Well, I've done it!
It works (not on all the computers I have tested, but it could be worse). I had problems with the package structure but it's all solved.
Thank you very much for the help supplied.
I close the thread.
Hi
I've found a machine where my applets don't work.
Using VM option
[b]
-verbose:class
[/b]
at the Java Control Panel, I discovered that the JVM was getting the Java3D jars from
C:\WINDOWS\Sun\Java\Deployment\Trusted
instead of the downloaded JNLP ones.
It happens because it is a trusted folder for JRE, and has priority over folders placed elsewhere.
I deleted the jar files from that folder and all runs fine.
I hope it helps
Hi aces!
It's impossible! I can't run my applet with JNLPAppletLauncher even with the Java3D installed.
No progress bar, no applet, ... Nothing.
The place of the screen where the applet should appear stays gray.
The java console doesn't print anything.
The server doesn't put anything in the output log.
I don't know what I'm doing wrong. I have copied the code you posted few days ago and made my changes.
The only thing I do not do like you is in the tag
I tried including my applet.jar file (I mean, the cubemap.jar in your code), not including that file, ...
Could the problem be that I'm not creating the jar file correctly?! I do it with the following command:
jar cvf myApplet.jar packageNameWhereAppletIsIncluded
Ok, I found my problem...
Where I'm trying to put my applet is not an HTML file... It's a XUL file and my tag was wrong. I have to put something like
So I have one problem solved.
The problem I have right now is that the Java console shows the following error:
Applet.init
subapplet.classname = packageName.subPackageName.Applet
subapplet.displayname = Running Java 3D Applet
Applet.start
Exception in thread "AWT-EventQueue-4" java.lang.NoClassDefFoundError: package/ClassName
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Unknown Source)
at java.lang.Class.getDeclaredMethod(Unknown Source)
at java.awt.Component.isCoalesceEventsOverriden(Unknown Source)
at java.awt.Component.access$100(Unknown Source)
at java.awt.Component$2.run(Unknown Source)
at java.awt.Component$2.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at java.awt.Component.checkCoalescing(Unknown Source)
at java.awt.Component.<init>(Unknown Source)
at java.awt.Container.<init>(Unknown Source)
at java.awt.Panel.<init>(Unknown Source)
at java.awt.Panel.<init>(Unknown Source)
at java.applet.Applet.<init>(Unknown Source)
at package.subPackageName.Applet.<init>(Unknown Source)
at org.jdesktop.applet.util.JNLPAppletLauncher.startSubApplet(JNLPAppletLauncher.java:1889)
at org.jdesktop.applet.util.JNLPAppletLauncher.access$200(JNLPAppletLauncher.java:658)
at org.jdesktop.applet.util.JNLPAppletLauncher$5.run(JNLPAppletLauncher.java:1269)
Caused by: java.lang.ClassNotFoundException: misc.PuntualData
... 32 more
and I have included the package/ClassName in the jar file I specify in the html applet code tag...
What am I doing wrong now?
Hello aces
with the applet you specified I am getting a JNLPAppletLauncher ClassNotFoundException..
mine is a machine with java3d installed.
I tried the following code to load the java3d libraries locally if the machine has java3d installed,
but it's giving some exceptions..
Method
------------
[code]
private static void loadLibraryInternal(String libraryName) {
    System.out.println("LOADING LIBRARY(" + libraryName + ")");
}
[/code]
Exception
-------------------
--------------- T H R E A D ---------------
Current thread (0x03361c00): JavaThread "J3D-Renderer-1" [_thread_in_native, id=2820]
siginfo: ExceptionCode=0xc0000005, writing address 0x0382e7dc
Registers:
EAX=0x00000076, EBX=0x000d6b76, ECX=0x03bbf7f0, EDX=0x0382e7dc
ESP=0x03bbf570, EBP=0x03bbf600, ESI=0x03bbf5b4, EDI=0x0382e701
EIP=0x0381c964, EFLAGS=0x00010202
Top of Stack: (sp=0x03bbf570)
0x03bbf570: 0381cc38 038c3cb8 0382e7dc 000d6b60
0x03bbf580: 00000000 00000000 00000000 00000000
0x03bbf590: 00000000 00000000 00000000 00000000
0x03bbf5a0: 00000000 00000000 00000000 00000000
0x03bbf5b0: 00000000 00000000 00000000 00000000
0x03bbf5c0: 00000000 00000000 00000000 00000000
0x03bbf5d0: 00000000 00000000 00000000 00000000
0x03bbf5e0: 00000000 00000000 00000000 00000000
Instructions: (pc=0x0381c964)
0x0381c954: 40 74 06 83 79 08 00 74 24 ff 49 04 78 0b 8b 11
0x0381c964: 88 02 ff 01 0f b6 c0 eb 0c 0f be c0 51 50 e8 d8
Stack: [0x03b70000,0x03bc0000), sp=0x03bbf570, free space=317k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [j3dcore-d3d.dll+0x5c964]
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j javax.media.j3d.Win32NativeConfigTemplate3D.choosePixelFormat(JI[I[J)I+0
j javax.media.j3d.Win32NativeConfigTemplate3D.getBestConfiguration(Ljavax/media/j3d/GraphicsConfigTemplate3D;[Ljava/awt/GraphicsConfiguration;)Ljava/awt/GraphicsConfiguration;+116
j javax.media.j3d.NativePipeline.getBestConfiguration(Ljavax/media/j3d/GraphicsConfigTemplate3D;[Ljava/awt/GraphicsConfiguration;)Ljava/awt/GraphicsConfiguration;+5
j javax.media.j3d.Renderer.doWork(J)V+1466
j javax.media.j3d.J3dThread.run()V+19
v ~StubRoutines::call_stub
--------------- P R O C E S S ---------------
Java Threads: ( => current thread )
=>0x03361c00 JavaThread "J3D-Renderer-1" [_thread_in_native, id=2820]
0x03388400 JavaThread "J3D-MasterControl-1" [_thread_blocked, id=4048]
0x0333b400 JavaThread "J3D-NotificationThread" [_thread_blocked, id=3892]
0x03342800 JavaThread "J3D-TimerThread" [_thread_blocked, id=3928]
0x0310b400 JavaThread "J3D-RenderingAttributesStructureUpdateThread" [_thread_blocked, id=1528]
0x030e3400 JavaThread "AWT-EventQueue-1" [_thread_blocked, id=464]
0x030ed000 JavaThread "TimerQueue" daemon [_thread_blocked, id=2104]
0x00926400 JavaThread "DestroyJavaVM" [_thread_blocked, id=872]
0x030cf000 JavaThread "AWT-EventQueue-0" [_thread_blocked, id=2932]
0x030c1400 JavaThread "thread applet-main.MainClass.class" [_thread_blocked, id=2936]
0x02f28400 JavaThread "AWT-Windows" daemon [_thread_in_native, id=3276]
0x02f27800 JavaThread "AWT-Shutdown" [_thread_blocked, id=1440]
0x02f21800 JavaThread "Java2D Disposer" daemon [_thread_blocked, id=2424]
0x02bad000 JavaThread "Low Memory Detector" daemon [_thread_blocked, id=2596]
0x02ba8000 JavaThread "CompilerThread0" daemon [_thread_blocked, id=3988]
0x02ba7000 JavaThread "Attach Listener" daemon [_thread_blocked, id=2576]
0x02ba6400 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=1672]
0x02b9e400 JavaThread "Finalizer" daemon [_thread_blocked, id=4072]
0x02b9d400 JavaThread "Reference Handler" daemon [_thread_blocked, id=2136]
Other Threads:
0x02b94000 VMThread [id=1564]
0x02bae400 WatcherThread [id=3120]
VM state:not at safepoint (normal execution)
VM Mutex/Monitor currently owned by a thread: None
Heap
def new generation total 960K, used 135K [0x22960000, 0x22a60000, 0x22e40000)
eden space 896K, 9% used [0x22960000, 0x22975c78, 0x22a40000)
from space 64K, 76% used [0x22a50000, 0x22a5c2d0, 0x22a60000)
to space 64K, 0% used [0x22a40000, 0x22a40000, 0x22a50000)
tenured generation total 4096K, used 1935K [0x22e40000, 0x23240000, 0x26960000)
the space 4096K, 47% used [0x22e40000, 0x23023fb8, 0x23024000, 0x23240000)
compacting perm gen total 12288K, used 2152K [0x26960000, 0x27560000, 0x2a960000)
the space 12288K, 17% used [0x26960000, 0x26b7a2e0, 0x26b7a400, 0x27560000)
ro space 8192K, 62% used [0x2a960000, 0x2ae5f728, 0x2ae5f800, 0x2b160000)
rw space 12288K, 52% used [0x2b160000, 0x2b7a1eb8, 0x2b7a2000, 0x2bd60000)
Dynamic libraries:
0x00400000 - 0x00423000 C:\Program Files\Java\jre1.6.0_02\bin\javaw.exe
0x7c900000 - 0x7c9b0000 C:\WINDOWS\system32\ntdll.dll
0x7c800000 - 0x7c8f4000 C:\WINDOWS\system32\kernel32.dll
0x6f000000 - 0x6f063000 C:\WINDOWS\SYSTEM32\SYSFER.DLL
0x5b860000 - 0x5b8b4000 C:\WINDOWS\system32\NETAPI32.dll
0x77c10000 - 0x77c68000 C:\WINDOWS\system32\msvcrt.dll
0x77dd0000 - 0x77e6b000 C:\WINDOWS\system32\ADVAPI32.dll
0x77e70000 - 0x77f01000 C:\WINDOWS\system32\RPCRT4.dll
0x77d40000 - 0x77dd0000 C:\WINDOWS\system32\USER32.dll
0x77f10000 - 0x77f56000 C:\WINDOWS\system32\GDI32.dll
0x7c340000 - 0x7c396000 C:\Program Files\Java\jre1.6.0_02\bin\msvcr71.dll
0x6d7c0000 - 0x6da09000 C:\Program Files\Java\jre1.6.0_02\bin\client\jvm.dll
0x76b40000 - 0x76b6d000 C:\WINDOWS\system32\WINMM.dll
0x6d310000 - 0x6d318000 C:\Program Files\Java\jre1.6.0_02\bin\hpi.dll
0x76bf0000 - 0x76bfb000 C:\WINDOWS\system32\PSAPI.DLL
0x6d770000 - 0x6d77c000 C:\Program Files\Java\jre1.6.0_02\bin\verify.dll
0x6d3b0000 - 0x6d3cf000 C:\Program Files\Java\jre1.6.0_02\bin\java.dll
0x6d7b0000 - 0x6d7bf000 C:\Program Files\Java\jre1.6.0_02\bin\zip.dll
0x6d000000 - 0x6d1c3000 C:\Program Files\Java\jre1.6.0_02\bin\awt.dll
0x73000000 - 0x73026000 C:\WINDOWS\system32\WINSPOOL.DRV
0x76390000 - 0x763ad000 C:\WINDOWS\system32\IMM32.dll
0x774e0000 - 0x7761c000 C:\WINDOWS\system32\ole32.dll
0x73760000 - 0x737a9000 C:\WINDOWS\system32\ddraw.dll
0x73bc0000 - 0x73bc6000 C:\WINDOWS\system32\DCIMAN32.dll
0x6d2b0000 - 0x6d303000 C:\Program Files\Java\jre1.6.0_02\bin\fontmanager.dll
0x6d570000 - 0x6d583000 C:\Program Files\Java\jre1.6.0_02\bin\net.dll
0x71ab0000 - 0x71ac7000 C:\WINDOWS\system32\WS2_32.dll
0x71aa0000 - 0x71aa8000 C:\WINDOWS\system32\WS2HELP.dll
0x6d590000 - 0x6d599000 C:\Program Files\Java\jre1.6.0_02\bin\nio.dll
0x7c9c0000 - 0x7d1d4000 C:\WINDOWS\system32\shell32.dll
0x77f60000 - 0x77fd6000 C:\WINDOWS\system32\SHLWAPI.dll
0x773d0000 - 0x774d2000 C:\WINDOWS\WinSxS\x86_Microsoft.Windows.Common-Controls_6595b64144ccf1df_6.0.2600.2180_x-ww_a84f1ff9\comctl32.dll
0x5d090000 - 0x5d127000 C:\WINDOWS\system32\comctl32.dll
0x10000000 - 0x1000e000 C:\WINDOWS\j3dcore-ogl-chk.dll
0x5ed00000 - 0x5edcc000 C:\WINDOWS\system32\OPENGL32.dll
0x68b20000 - 0x68b40000 C:\WINDOWS\system32\GLU32.dll
0x037c0000 - 0x03893000 C:\WINDOWS\j3dcore-d3d.dll
0x4fdd0000 - 0x4ff76000 C:\WINDOWS\system32\d3d9.dll
0x038a0000 - 0x038a6000 C:\WINDOWS\system32\d3d8thk.dll
0x77c00000 - 0x77c08000 C:\WINDOWS\system32\VERSION.dll
0x74c80000 - 0x74cac000 C:\WINDOWS\system32\OLEACC.dll
0x76080000 - 0x760e5000 C:\WINDOWS\system32\MSVCP60.dll
0x77120000 - 0x771ac000 C:\WINDOWS\system32\OLEAUT32.dll
0x038d0000 - 0x038f9000 C:\WINDOWS\j3dcore-ogl.dll
0x69000000 - 0x6940b000 C:\WINDOWS\system32\sisgl770.dll
0x6d3e0000 - 0x6d3e6000 C:\Program Files\Java\jre1.6.0_02\bin\jawt.dll
VM Arguments:
jvm_args: -Djava.security.policy=java.policy.applet
java_command: sun.applet.AppletViewer main.MainClass1210575626484.html
Launcher Type: SUN_STANDARD
Environment Variables:
PATH=C:\Program Files\Java\jre1.6.0_02\bin\client;C:\Program Files\Java\jre1.6.0_02\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem
USERNAME=raji
OS=Windows_NT
PROCESSOR_IDENTIFIER=x86 Family 6 Model 15 Stepping 2, GenuineIntel
--------------- S Y S T E M ---------------
OS: Windows XP Build 2600 Service Pack 2
CPU:total 2 (2 cores per cpu, 1 threads per core) family 6 model 15 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3
Memory: 4k page, physical 2030892k(1246748k free), swap 3969668k(3209776k free)
vm_info: Java HotSpot(TM) Client VM (1.6.0_02-b06) for windows-x86, built on Jul 12 2007 01:16:14 by "java_re" with unknown MS VC++:1310
>[j3dcore-d3d.dll+0x5c964]
The crash problem you are reporting is not related to JNLP/applet.
This is about supporting legacy video cards like SiS , using D3D pipeline.
This issue is fixed for Java3D 1.5.2 and higher.
Also make sure you have updated DirectX drivers from
Added later :
You can try the fix right now by using...
replacing
Message was edited by: aces
Hi aces,
thanks for your reply..
Actually aces,
> thanks for your reply..
> Actually Raji
Let me know if the below link works for you in both cases (with/without Java3D installed):...
It does work on some systems I tested so far, with/without java3D installed.
> Let me know below link works for you in both cases
> (with/without Java3D installed) :
>
> cubemap/CubeMapDemo.html
Hi,
for me it's not working on the machine with java3d installed..
>
>
> It does work on some systems I tested so far,
> with/without java3D installed.
My applet code already contains this code. With that, I can load my applet only on a machine without java3d installed..
Message was edited by: rajisp
Hi Dmitri!
I have spent the whole day trying to "execute" the applet-launcher and I have not been able to do that...
I try to put the code shown in the examples and it doesn't work. I mean, I copy the code below
width=400
height=300
and change the name of the subapplet.classname putting my own applet classname... but it doesn't do anything.
Do I have to create a .jnlp file or something like that?!
Anyway there might be another solution. I mean something like the applet shown here.
If you test it on a computer without java3d installed... it works, and you don't have to accept a certificate or anything like that...
I don't know if you understand what I mean...
> Do I have to create a .jnlp file or something like that?!
>
Sorry, I can not help you with this one. You have to read the doc.
> Anyway there might be another solution. I mean something like the applet shown here
>.
> If you test it onto a computer without java3d installed... It works and you dont have to accept a certificate and nothing like that...
>
It seems like this applet is not based on java3d.
---------------------------------------------------------------------
To unsubscribe, e-mail: interest-unsubscribe@java3d.dev.java.net
For additional commands, e-mail: interest-help@java3d.dev.java.net
> It seems like this applet is not based on java3d.
Sorry then Dmitri, I hadn't noticed that. Do you mean that the only way of doing what I want to do is using JNLPAppletLauncher?!
Is the applet shown below an applet based on java3d!?
If so, that's what I want to do. If it's not an applet based on java3d, what kind of technology are they using there?
Regards
On 09.05.2008 09:15, java3d-interest@javadesktop.org wrote:
>> It seems like this applet is not based on java3d.
>>
> Sorry then Dmitri, I didn't notified that. Do you mean that the only way of doing what I want to do is using JNLPAppletLauncher?!
>
Yes. If you need an _java3d_ applet running on a computer without java3d
you have to use JNLPAppletLauncher. Or you can detect the absence of
java3d and ask an user to install it manually.
> Is the applet shown below an applet based on java3d!?
>
>
Nop :)
> If so, that's what I want to do. If it's not an applet based on java3d, what kind of technology are they using there?
>
They use their own pure-Java rendering engine. In most cases it
does not utilize the hardware capabilities of a video card (no shading,
no antialiasing etc.).
Take a look at this (upper right corner):
DD
I have some Java3D applets in my web page and they work fine for both cases : with and without Java3D installed.
My applet tag looks like this :
[code]
[/code]
Let me know if you have problems to run above applet
Hi aces!
Thaks for your reply... I'll test your code next Monday (now I can't test it).
What you have is what I need to do... and watching the code you posted, I would like to ask you some questions.
1.- You wrote "
The file next to the tag archive, is the applet you are showing into a jar file, or just a class (or classes) you need to use?
2.- "
" Is RunJ3DemoCubeMap the name of the applet class?!
3.- Do I have to code any jnlpExtension class or are they predefined yet?!
On the other hand, once I've adapted the code to my project, do I have to add the applet-launcher.jar library to the /lib directory of my project?
Do I have to do any other special action in addition to adapting the code you posted?! (I mean whether I have to code something in any other class, or copy any other file or library...)
Thanks a lot !
Hi
Sorry to reply it out of order.
Please see my previous post about that crash.
It will be fixed as soon as Sun puts Java3D 1.5.2 as the [i]latest[/i] libs for JNLP, but you can try it right now by using...
replacing
About your questions:
[i]
>1.- You wrote " >The file next to the tag archive, is the applet you are showing into a jar file, or just a class (or classes) you need to use?
[/i]
This is the JNLP applet launcher itself. This is the wrapper used to launch our Java3D applets
[i]
>2.- "
" Is RunJ3DemoCubeMap the name of the applet class?!
[/i]
Yes ;)
[i]
>3.- Do I have to code any jnlpExtension class or are they predefined yet?!
[/i]
The jnlp extensions are all predefined. See links below:
Several extensions exposed here :
Java3D
Hello again Aces!
First of all, thank you very much for your reply.
Moreover, I would like to know if you mean, in your reply to question #1, that I have to code the JNLP applet launcher myself. If so, that's the reason why my code won't work.
I assume that you are describing the file "cubemap.jar", the one I asked about. If you weren't describing it, please let me know what the purpose of this file is.
One more time, thank you very much
Hi
The file cubemap.jar is my applet jar file. It contains my compiled classes and images, as well as my Java3D applet class, i.e., the one named on the subapplet.classname parameter
cube.RunJ3DemoCubeMap
[code][/code]
So you just have to develop an applet as usual, put the compiled classes and resources packed in a jar file, and use the JNLP applet launcher.
It is a bit buggy, but works fine if a user does not have java3d installed.
I know this is an old post, but if you're still looking for a solution, or if anyone else runs into this problem of Java3d in applets: I ran into the same problems that you are having, so I developed my own solution. Go to people.bu.edu/rrusso1/java3d.html and download the jar file. Inside the jar file are the native library files for 32- and 64-bit versions of Windows and Linux (Mac OS X has Java3d natively installed). Simply add this jar to the archive list in your applet and call 'new Java3dLinker()'; it will determine the OS you have, extract the proper library files, and add them to your path, so when Java3d goes to load the library it will correctly find it in the path. Let me know if anyone has any issues and if this works!
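The Java3dLinker approach above boils down to inspecting `os.name`/`os.arch`, extracting the matching bundled native libraries, and putting them on the load path. A hypothetical sketch of just the platform-selection step (the folder-name scheme here is illustrative, not the actual layout of the jar mentioned above):

```java
// Hypothetical helper: choose which bundled native-library folder to
// extract, based on the JVM's os.name / os.arch system properties.
public class NativePicker {
    public static String pick(String osName, String osArch) {
        // Treat anything that isn't Windows as Linux, matching the two
        // platforms the jar above claims to bundle.
        String os = osName.toLowerCase().startsWith("windows") ? "windows" : "linux";
        String bits = osArch.contains("64") ? "64" : "32";
        return os + "-" + bits; // e.g. "windows-32", "linux-64"
    }

    public static void main(String[] args) {
        // On the machine from the crash log above (Windows XP, x86)
        // this would print "windows-32".
        System.out.println(pick(System.getProperty("os.name"),
                                System.getProperty("os.arch")));
    }
}
```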
|
https://www.java.net/node/678003
|
CC-MAIN-2015-11
|
refinedweb
| 3,445
| 60.41
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.