There are a few implementations of tree views for WinRT available, but none of them did exactly what I wanted. I wanted something very simple (300 lines of code) that I could tailor the look and feel easily without getting bogged down in generic hierarchical data templates etc. I’ve written 2 or 3 tree views but the most recent one came out pretty well so I thought I would share.
The basic idea is to use a ListView and only display the expanded items, and indent them depending on their level in the hierarchy. By using a ListView, we get all the rich templating and mouse over behaviours plus the entrance transitions etc.
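That flattening step is the heart of the control and can be sketched in a few lines. Here is an illustrative Python version (dict-based nodes for brevity, not the sample's actual Item/ItemFolder types):

```python
def visible_nodes(items, depth=0):
    # Flatten the tree into the list the ListView would display:
    # every node is yielded with its depth (used for indentation),
    # but children are only visited when their folder is expanded.
    for item in items:
        yield item, depth
        if item.get("expanded"):
            yield from visible_nodes(item.get("children", []), depth + 1)

tree = [{"name": "docs", "expanded": True,
         "children": [{"name": "a.txt"}]},
        {"name": "src", "expanded": False,
         "children": [{"name": "b.cs"}]}]

print([(n["name"], d) for n, d in visible_nodes(tree)])
# -> [('docs', 0), ('a.txt', 1), ('src', 0)]
```

Note that the collapsed folder `src` still appears, but its children do not; toggling `expanded` and rebuilding this list is all the "tree" behaviour the ListView needs.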
A couple of caveats with this implementation. Rather than go for a generic solution, the tree view has knowledge of the items it is displaying (Item and ItemFolder in the sample). So if you use it, you will have to modify it to understand your data types.
Secondly, it stores the expanded state of the nodes in the data model, which means you can’t have two tree views operating on the same data otherwise they will expand and collapse together as they share the same data.
I took the hit on these two things for simplicity’s sake, but the approach will not appeal to the purists.
The tree view assumes its ItemsSource is a collection of Item and ItemFolder objects, and that an ItemFolder has a Children property containing another similar list.
I chose to use files and folders for the sample, but clearly you could display any tree structure.
I’ll not explain the control in any great detail as it would take up too much space, but it is worth pointing out one rather cool member of ItemFolder – the AllChildren property. I found when working with trees of data I was constantly writing recursive functions, as these are the easiest way to process trees, but it started getting quite tiresome, so I came up with this neat little property:
public IEnumerable<Item> AllChildren
{
    get
    {
        foreach (Item child in this.Children)
        {
            yield return child;
            ItemFolder folder = child as ItemFolder;
            if (folder != null && folder.Children.Count > 0)
                foreach (Item grandChild in folder.AllChildren)
                    yield return grandChild;
        }
    }
}
This hides the recursion in the AllChildren property and allows you to process the tree as a flat list with a foreach statement.
foreach (var child in parent.AllChildren)
It's the first time I've used yield and it really helps when working with trees.
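The same hidden-recursion trick is available in other languages too; for comparison, here is a Python sketch using `yield from` (plain nested lists stand in for folders, so it is illustrative only):

```python
def all_children(children):
    # Walk the tree depth-first, flattening it into one sequence;
    # a nested list stands in for an ItemFolder's Children collection.
    for child in children:
        if isinstance(child, list):
            yield from all_children(child)
        else:
            yield child

tree = ["a", ["b", ["c"]], "d"]
print(list(all_children(tree)))  # -> ['a', 'b', 'c', 'd']
```

The caller again sees a flat sequence and never writes the recursion itself.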
Note that the tree view watches all its children, so if you add something into one of the Children collections, it will automatically update.
Hope you like it, here is the sample code. You are free to use it in your code, even for commercial use.
Paul Tallett, UX Global Practice, Microsoft UK
|
OPCFW_CODE
|
November 23, 2021: Added gfsa_has_access filter for filtering whether the current viewer has access to a given post.
March 2, 2021: Updated snippet to run as a Singleton plugin. Added
June 21, 2017: Improved support for automatically showing required form if no required message is specified.
March 2, 2017: Updated submitted forms cookie to be persistent by default. Added new "is_persistent" option to disable this.
March 1, 2017: Added support for "gwsa_requires_submission_redirect" option to allow automatically redirecting to a specific page if the user requires access.
March 23, 2016: Added support for requiring a form to be submitted before any page can be accessed. Added support for storing submitted forms in user meta.
March 3, 2015: Added support for shortcodes in "gwsa_requires_submission_message" custom field. Fixed issue where json_decode() did not return an array.
You have a post or page you’d like to protect but you don’t want to require the user to sign up for a user account and you just don’t need a full-blown membership system. All you want to do is collect a few details about the user for your mailing list or CRM.
This plugin provides an easy way to accomplish this. Any post-based content (that includes pages and custom post types) that support custom fields can be locked down. You set a few special custom fields and the Gravity Forms Submit to Access plugin takes care of the rest.
Install the plugin
- Click the “Download Code” button above and save the file to your Desktop.
- Drop the file into your WordPress plugins folder via FTP – or – zip the file up and upload it via WordPress plugin uploader.
Configure the plugin
- Read on for step-by-step instructions
Locking Down a Page
Navigate to the Edit screen for any post, page, or any custom post type.
Enable “Custom Fields” via the Screen Options at the top of the page. There is a good chance they are already enabled.
Add a custom field named gwsa_require_submission with a value of 1.
Add a custom field named gwsa_form_ids and set the value to the ID of whichever form the user should submit to gain access to this page.
Custom Field Options
gwsa_require_submission (string) (required)
Add this custom field with a value of 1 to require a Gravity Form to be submitted to gain access. Set this value to per_page to require the submission on the page itself.
gwsa_form_ids (string) (optional)
Add this custom field and set the value to the ID of the form which must be submitted to gain access to this page. If there are multiple forms that can be submitted to gain access, you may include them as a comma-delimited list (e.g. 1,2,3). If any form can be submitted to gain access to this page, do not add this custom field option.
gwsa_requires_submission_message (string) (optional)
Override the default message that is displayed when the user does not have access to view the content of this page.
gwsa_requires_submission_redirect (string) (optional)
Provide a URL to which the user will be redirected if they do not have access to view the content of this page.
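Taken together, the options above amount to a simple gate on each post. A hypothetical Python sketch of that decision (not the plugin's actual code, and ignoring per_page scoping and redirects):

```python
def has_access(require_submission, form_ids, submitted_ids):
    # require_submission: value of gwsa_require_submission (falsy = page open)
    # form_ids: IDs parsed from gwsa_form_ids, empty if the field is absent
    # submitted_ids: form IDs this visitor has already submitted
    if not require_submission:
        return True
    if not form_ids:  # no gwsa_form_ids: submitting any form unlocks
        return len(submitted_ids) > 0
    return any(fid in submitted_ids for fid in form_ids)

print(has_access("1", [2, 3], {3}))  # -> True
print(has_access("1", [2, 3], {5}))  # -> False
```

Whether `submitted_ids` comes from a persistent cookie, a session cookie, or user meta is exactly what the is_persistent and enable_user_meta options below control.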
Per Page Locking
By default, any pages locked by the same form submission will all be unlocked simultaneously. For example, let’s say you lock down two pages on your site, called Welcome New Members and House Rules, and you only want members of your site to be able to view those pages. You can use the same form to unlock both pages by inserting the same Form ID into the gwsa_form_ids field on both pages. When a user submits the form on either of the two pages, both pages are unlocked at the same time.
In some cases, you only want to unlock a page when a form is submitted on that page. In the case of the example pages above, you might have a simple Terms of Service form that requires the user to check a box indicating they have read the terms before they can view the content. If you set the gwsa_require_submission custom field value to per_page instead of 1, the page content is only unlocked when the user submits the form on that specific page.
requires_submission_message (string) (optional)
Define the default message that is displayed if the user does not have access to the content. This value will be overridden if a post-specific message is set via the gwsa_requires_submission_message custom field option. Defaults to
'Oops! You do not have access to this page.'.
bypass_cache (bool) (optional)
Enabling this option will allow the script to bypass any page/cookie caching by fetching the post content via AJAX. Defaults to
loading_message (array) (optional)
If bypass_cache is enabled, this option allows you to control the loading message which is visible while the post content is being fetched via AJAX.
is_persistent (bool) (optional)
The cookie that stores which forms have been submitted for the visitor is persistent by default. Set this to false to make the cookie session-based.
enable_user_meta (bool) (optional)
Set this to true to save submitted forms in the user meta rather than a cookie. Only works for logged-in users.
This is a bare-bones plugin. It uses WordPress' custom fields UI to handle setting the options, and advanced configuration happens in the plugin itself.
If this proves to be a popular resource, I’ll be happy to enhance it to be even easier to use.
|
OPCFW_CODE
|
$35.00 dollars a year is hardly a waste of money IMO.
There are these things called jobs. You can earn money through these "jobs", and with this "job money" you can buy things and stuff, like Xbox memberships. And if one is smart about using this "job money" they can pay less for things than they usually are.
OK. Now, don't anyone get their panties in a bunch, as this is MY personal opinion. At one time, I really enjoyed my Gold membership. "Back in the day", when gamers had fun, socialized through the game with each other, etc. Now, I see a lot of verbal "abuse",
cheating, people quitting during the game, etc.
Recently used a 14 day Gold code (from Halo 4), and thought, "ok, I'll play on-line for these 14 days and see how it is." Sad to say, I just can't justify the Gold membership. It's not the money, because I think the price could be well worth the "fun"
time one could have on Xbox Live.
My issues are: When you avoid someone (and I avoid A LOT of players), don't match me up with them ANYMORE! I may play 6-8 hours throughout the day. I don't want to play with these people later during the day, nor anytime "down the road".
Make the games/servers where NO ONE can cheat! Surely programmers today can foresee and anticipate players cheating and take action BEFORE people can cheat. I'm talking about modded avatars, modded gamerscore, obnoxious/offensive gamertags, lag switches, lagging in general, exploiting "glitches", etc.
I have seen so many "offensive" gamertags lately. How do they make it through the "filters"? I was playing with 2 people a few days ago with VERY offensive tags, got hold of Xbox enforcement through on-line chat, and voiced my opinion. I was VERY
pleased with this contact, and the results that followed. ( a tag that expresses "illegal" *** with children, should NEVER have been able to make it through the filtering system).
I am a VERY competitive player. I love to win and strive to be the best I can be. I also love to have fun (as many of us have/did in the past). But things seem so unbalanced anymore. I abide by the COC and TOU. The only time I use a mic is when I notice my team talking/strategizing towards working the objective of the game. I have no fun when someone is cheating, ruining my game experience in some immature way (singing, cursing, trash talking, etc.), or any other "dirty" play.
Just my personal opinion........... (I just noticed a three letter word that the "filter" wouldn't let me use! Why can't the filtering system work when it comes to gamertags?).
|
OPCFW_CODE
|
I was wondering… in Windows one can mount a WebDAV source as a network drive. This would allow having only rclone running as a service and would eliminate the need for WinFsp.
Are there any advantages in using the mount command, compared to using serve with WebDAV?
I think there are pros and cons. The WebDAV protocol isn't very full-featured, so it doesn't support modification times - that is probably the biggest thing you'll notice.
WinFSP is super reliable but it might be the Windows WebDAV network drive code is even more reliable - I don't know!
Give it a go and tell us how it went
Modification times are not interesting in this use case, as this would just be a "media machine". Playing back stuff on the cloud.
WinFSP has been good for me, for a long time. But recently I updated it to the 2022 release and I started having problems. In a seemingly random way, from time to time, transfer rates were capped at 14-18 Mbps, which was leading to stutter in playback (that's how I noticed). I first reverted to rclone 1.57 from 1.58.1 but the problem did not go away. Now it's been two days on WinFSP 2019 and I've never had the problem happen.
Clearly not very scientific but I did not know how else to solve this. Debug logs for rclone were showing exactly zero problems during stuttering moments. And I am sure it wasn't my ISP throttling me because often it went away by simply seeking ahead (thus requesting different chunks).
As such I was considering taking away the "middle man" (WinFSP), leading to a more streamlined configuration.
Although the current one seems good (WinFSP 2019 and rclone 1.57). I'll probably try to update again 1.58.1 and see if it stays ok.
It's worth an experiment with WebDAV.
I don't know how the windows WebDAV client works but it is probably in a similar way to winfsp so you are swapping one middle man for another.
Btw how do you set it up? I keep meaning to give it a try. I know very little about Windows though!
I did rclone serve WebDAV, specified the port and then mounted the WebDAV as a disk, using the map network disk native windows functionality.
Everything worked BUT for reasons I wasn’t able to understand, movies were not openable. PDFs on Google Drive opened with no problem, pictures no problem, music flac files no problem.
MKVs did not.
Went back to mount, will probably revisit in the future.
This is the log with the video files I tried to open: rclone WebDAV - Pastebin.com (edited for clarity)
I am on macOS so it isn't going to be identical, but I've found that mounting (on macOS with FUSE) is better if you don't mess it up. What I mean by that is: as long as you only stop the mount properly (whether active or --daemon), it is fine. But if you CTRL+C a FUSE mount, you are in for a bad time.
On the other hand, WebDAV can only be stopped by the rclone process. Ejecting doesn't do anything.
Cryptomator offers both and they say that mount is better than WebDAV except that WebDAV is more compatible. I'd say the same is true for rclone. In general, I only use WebDAV over mount when I can't (or don't want to) install FUSE
This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.
|
OPCFW_CODE
|
use web_sys::{WebGl2RenderingContext, WebGlVertexArrayObject};

use crate::graphics::GraphicContext;

#[derive(Clone)]
pub struct RenderModel {
    pub vao: WebGlVertexArrayObject,
    pub triangle_count: u32,
}

impl RenderModel {
    pub fn new(ctx: &GraphicContext, vertices: &[f32]) -> RenderModel {
        let vao: WebGlVertexArrayObject = ctx.gl.create_vertex_array().expect("failed to create VAO");
        ctx.gl.bind_vertex_array(Some(&vao));

        let buffer = ctx.gl.create_buffer().expect("failed to create buffer");
        ctx.gl.bind_buffer(WebGl2RenderingContext::ARRAY_BUFFER, Some(&buffer));

        // Note that `Float32Array::view` is somewhat dangerous (hence the
        // `unsafe`!). This is creating a raw view into our module's
        // `WebAssembly.Memory` buffer, but if we allocate more pages for ourself
        // (aka do a memory allocation in Rust) it'll cause the buffer to change,
        // causing the `Float32Array` to be invalid.
        //
        // As a result, after `Float32Array::view` we have to be very careful not to
        // do any memory allocations before it's dropped.
        unsafe {
            let vert_array = js_sys::Float32Array::view(vertices);
            ctx.gl.buffer_data_with_array_buffer_view(
                WebGl2RenderingContext::ARRAY_BUFFER,
                &vert_array,
                WebGl2RenderingContext::STATIC_DRAW,
            );
        }

        ctx.gl.enable_vertex_attrib_array(ctx.position_loc as u32);
        // The attribute location here must match the one enabled above.
        ctx.gl.vertex_attrib_pointer_with_i32(
            ctx.position_loc as u32,
            3,
            WebGl2RenderingContext::FLOAT,
            false,
            0,
            0,
        );

        RenderModel {
            vao,
            triangle_count: (vertices.len() / 3) as u32,
        }
    }
}
|
STACK_EDU
|
Multiple Sources, One Console
Create a Baseline
Many organisations have a wealth of data about their estate stored in different formats across a variety of systems. Although each silo will contain a snippet of information relevant to that system, delivering a new project or change to an existing service will require a broader picture of the estate to fully understand the current landscape and how that relates to the target environment. Manually bringing this together, tracking so many moving parts, maintaining the relationships and dependencies between the objects, and ensuring the most current information is shared with the wider team is a significant undertaking that consumes substantial resource and often introduces error.
ManagementStudio overcomes these challenges by directly integrating with existing systems, such as Microsoft Active Directory (AD) and Microsoft End Point Configuration Manager (MECM), to create a rich profile of users and assets within the business. Data points from each of the source systems are layered to create a contextual view of what users do, their place in the organisation, and how assets are utilised. And what’s more, ManagementStudio tracks changes to all objects as they happen to ensure everyone using the tool has current and relevant insight into the estate.
Supported Data Sources
ManagementStudio supports multiple methods of importing data, supplementing automated tools with manual input to create a blended view of the estate. This provides flexibility to determine the most appropriate method of harvesting data:
- Connectors: Programmatically import data directly from supported inventory systems:
- Microsoft Active Directory (AD)
- Microsoft Endpoint Configuration Manager (MECM)
- Lakeside SysTrack
- Manual bulk data import
- Dynamic Forms
Understanding the current environment before any planning takes place is crucial to a successful deployment. ManagementStudio uses Connectors to routinely talk directly to the source database, such as AD and MECM, to create a baseline of the estate as it is today and keep this view up-to-date and relevant. Connecting to existing data sources means that a wide range of detailed history is immediately available to ManagementStudio, and they provide incredibly rich inventory and usage data without installing an agent on client devices.
The Connector is responsible for importing information about individual objects, defining the relationships between different objects and their place within the business, and tracking any changes that occur. Using a Connector over other methods of data import has the following benefits:
- The import is automatic and is scheduled to update twice a day to ensure that any changes to staff or assets are captured
- Filters can be used to ignore objects that meet a defined criterion, for example, excluding service accounts in an AD import or only importing information about devices with a specific OS type in a MECM import
- The fields that are included in the import are easily extended in the ManagementStudio Administration section, enabling the Connector to collect custom information.
- If an object is deleted or disabled in the source data, the Connector will automatically archive the corresponding record in ManagementStudio
- Automatically create and maintain relationships with objects from multiple data sources, for example, linking an AD user to a device in MECM and corresponding mailbox in Microsoft Exchange
Without Connectors, it would be incredibly difficult and time consuming to manually create a daily view of the estate to the same level of accuracy and detail.
Manual Data Imports
In addition to automated data imports using Connectors, ManagementStudio also supports ad hoc manual bulk data imports from an Excel worksheet or a Comma Separated Values (CSV) file. This is ideal for data that changes infrequently or when a one-off import is required. For example, an HR extract can be used to create the organisational structure and update employee locations within ManagementStudio.
ManagementStudio simplifies the import by:
- Automatically mapping the column headers to known fields in ManagementStudio where the names match
- Matching records to corresponding ManagementStudio entries using a variety of unique identifiers
- Creating new records and updating existing entries
- Automatically moving records to a particular process and Blueprint
- Enabling the mapping configuration and import options to be saved for future use
In addition, access to the import tool is controlled by User Role Groups, which gives ManagementStudio administrators the ability to only import data for specific modules. For example, the application team would only have the option to import data for the Applications module.
Not all data that is used in ManagementStudio is taken from existing business systems. Activities that are part of the workflow will also generate large amounts of information that will need to be centrally stored, for example:
- Details required for packaging such as location of the installation media and instructions, pre-requisites, application behaviour and post-installation activities
- A user’s hardware and software requirements
- Which users should have access to a particular application
- License and support agreements that are in place for specific assets
- Health and Safety information for each site or location
ManagementStudio’s Discovery feature allows the organisation to easily add new tabs and questions to ensure that all information relevant to that record is captured.
Discovery is supported in each of ManagementStudio’s core modules.
End-users are often overlooked as a potential source of information, but input from your user base can positively impact the success of a project or change. Consider the following scenarios that most organisations will have encountered:
- Staff are required to work from home at short notice and laptops are to be shipped to their address
- An engineer will be sent to a colleague’s office to assist with a Windows upgrade
If the end-user doesn’t validate the information held about them in HR or Active Directory, the laptop might be sent to the wrong address, or the engineer attends the wrong office. Not only does this add unnecessary delay and cost to the workstream, but it also results in lost productivity and dissatisfaction for the end-user – ultimately creating negative perception towards the change.
To overcome this challenge, we created Dynamic Forms: an easy way for end-users to review the data held about them, and for IT to collect more information:
- Dynamic Forms are hosted on the ManagementStudio web portal and are accessed using any modern web browser
- A link to the form is sent to users by email, with follow-up emails automatically sent if no response is received
- The forms support conditional fields giving the ability to display additional questions based on a response
- Dynamic Forms support HTML and Markdown out of the box, providing the ability to create rich, beautifully presented surveys
- Additional actions can be triggered when a form is submitted, for example, a user is added to the Windows 11 AD group once they have agreed to the IT acceptable usage policy
|
OPCFW_CODE
|
from numba import njit, prange
from numba import jit
import numpy as np


def normalize(x: np.ndarray):
    return x / np.sqrt((x ** 2).sum())


@njit(fastmath=True)
def rotation(alpha):
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array(((c, -s), (s, c)))


@njit(fastmath=True)
def row_norm(matrix: np.ndarray):
    return np.sqrt(np.sum(matrix ** 2, axis=1))


@njit(fastmath=True)
def is_point_left(a, b, c):
    """Computes if c is left of the line ab.

    :return: True if c is left of the line ab and False otherwise.
    """
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) > 0


@njit(fastmath=True)
def get_local_poses(poses, relative_to):
    local_poses = poses - relative_to
    R = rotation(relative_to[2])
    local_poses[:, :2] = local_poses[:, :2].dot(R)
    return local_poses


@njit(fastmath=True)
def transform_sin_cos(radians):
    rads = np.atleast_2d(radians).reshape(-1, 1)
    return np.concatenate((np.sin(rads), np.cos(rads)), axis=-1)


@njit(fastmath=True)
def polar_coordinates(points):
    # compute polar coordinates
    pts = np.atleast_2d(points)
    dist = row_norm(pts)
    phi = np.arctan2(pts[:, 1], pts[:, 0])
    return dist, phi


def ray_casting_walls(fish_pose, world_bounds, ray_orientations, diagonal_length):
    return 1 - np.nanmin(_ray_casting_walls(fish_pose, np.asarray(world_bounds), ray_orientations),
                         axis=1) / diagonal_length


# TODO: Check if memory leak still occurs after issue #4093 of Numba was solved
# @jit(nopython=True, parallel=False)
def _ray_casting_walls(fish_pose, world_bounds, ray_orientations):
    # assert len(fish_pose) in [3, 4], 'expecting 3- or 4-dimensional vector for fish_pose'
    fish_position = fish_pose[:2]
    if len(fish_pose) == 3:
        fish_orientation = fish_pose[2]
    else:
        fish_orientation = np.arctan2(fish_pose[3], fish_pose[2])
    ray_orientations = ray_orientations.reshape((-1, 1))
    ray_orientations = ray_orientations + fish_orientation
    # world_bounds = np.asarray(world_bounds)
    ray_sin = np.sin(ray_orientations)
    ray_cos = np.cos(ray_orientations)
    ray_a = ray_sin
    ray_b = -ray_cos
    ray_c = np.zeros_like(ray_orientations)
    ray_lines = np.concatenate((ray_a, ray_b, ray_c), axis=1)
    # compute homogeneous coordinates of walls
    walls_a = np.array([1., .0, 1., .0]).reshape((-1, 1))
    walls_b = np.array([.0, 1., .0, 1.]).reshape((-1, 1))
    walls_c = -(world_bounds - fish_position).reshape((-1, 1))
    wall_lines = np.concatenate((walls_a, walls_b, walls_c), axis=1)
    # intersections of all rays with the walls
    # TODO: allocating 1D array and reshaping afterwards is a workaround for issue #4093
    intersections = np.empty((len(ray_orientations) * 4)).reshape((-1, 4))
    intersections[:] = np.NaN
    indices = [np.asarray(ray_cos < .0).nonzero()[0],
               np.asarray(ray_sin < .0).nonzero()[0],
               np.asarray(ray_cos > .0).nonzero()[0],
               np.asarray(ray_sin > .0).nonzero()[0]]
    # for i, wall, inz in enumerate(zip(wall_lines, indices)):
    for i in prange(4):
        if len(indices[i]) > 0:
            xs = compute_line_line_intersection(ray_lines[indices[i]], wall_lines[i])
            intersections[indices[i], i] = np.sqrt(np.sum(xs ** 2, axis=1))
    return intersections


@njit(fastmath=True)
def compute_line_line_intersection(line1: np.ndarray, line2: np.ndarray):
    # check that the lines are given as 2d-arrays and convert if necessary
    line1 = np.atleast_2d(line1)
    line2 = np.atleast_2d(line2)
    # check if lines are given as 3d homogeneous coordinates and that we have either 1-n, n-1, n-n (element wise)
    assert line1.shape[1] == 3
    assert line2.shape[1] == 3
    assert line1.shape[0] == line2.shape[0] or line1.shape[0] == 1 or line2.shape[0] == 1
    # compute the last coordinate of the intersections
    c = (line1[:, 0] * line2[:, 1] - line1[:, 1] * line2[:, 0]).reshape(-1, 1)
    # if this coordinate is 0 then there is no intersection
    inz = c.nonzero()[0]
    r = np.empty(shape=(len(c), 2))
    r[:] = np.NaN
    i1 = np.array([0])
    i2 = np.array([0])
    if line1.shape[0] > 1:
        i1 = inz
    if line2.shape[0] > 1:
        i2 = inz
    # compute coordinates of intersections
    a = line1[i1, 1] * line2[i2, 2] - line2[i2, 1] * line1[i1, 2]
    b = line2[i2, 0] * line1[i1, 2] - line1[i1, 0] * line2[i2, 2]
    r[inz, :] = np.concatenate((a.reshape((-1, 1)), b.reshape((-1, 1))), axis=1) / c[inz]
    return r


@jit(nopython=True, parallel=False)
def compute_dist_bins(relative_to, poses, bin_boundaries, max_dist):
    c, s = np.cos(relative_to[2]), np.sin(relative_to[2])
    rot = np.array(((c, -s), (s, c)))
    local_positions = (poses[:, :2] - relative_to[:2]).dot(rot)
    # compute polar coordinates
    dist, phi = polar_coordinates(local_positions)
    dist = np.minimum(dist, max_dist) / max_dist
    dist_array = np.ones(len(bin_boundaries) - 1)
    for i in range(len(poses)):
        for j in range(len(bin_boundaries) - 1):
            if bin_boundaries[j] <= phi[i] < bin_boundaries[j + 1]:
                if dist[i] < dist_array[j]:
                    dist_array[j] = dist[i]
                break
    return 1 - dist_array


def sigmoid(x, shrink):
    return 2. / (1 + np.exp(-x * shrink)) - 1.
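The compute_line_line_intersection routine above is, stripped of the index bookkeeping that supports broadcasting, the standard homogeneous-coordinates identity: the intersection of two lines is their cross product. A minimal standalone sketch of that identity (illustrative only, not part of the module):

```python
import numpy as np

def intersect(l1, l2):
    # Homogeneous line intersection: the cross product of two lines
    # (a, b, c), each satisfying a*x + b*y + c = 0, gives the
    # intersection point in homogeneous coordinates.
    p = np.cross(l1, l2)
    if p[2] == 0:
        return None  # parallel lines, no finite intersection
    return p[:2] / p[2]

# x = 1 is (1, 0, -1); y = 2 is (0, 1, -2); they meet at (1, 2).
print(intersect([1.0, 0.0, -1.0], [0.0, 1.0, -2.0]))  # -> [1. 2.]
```

The a, b, and c terms in compute_line_line_intersection are exactly the three components of this cross product, computed per row so many rays can be intersected with one wall at once.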
|
STACK_EDU
|
Nokia Maps is a giant step backwards
My relevant background - I live in Los Angeles, owned an HTC HD7S for more than a year and just recently got a Lumia 920. I like everything about my new 920 except vibration noise, and this horrid Nokia Maps app which is forced on us...
Problems with Nokia Maps:
Lack of integration with the rest of the operating system.
Nokia maps is an island to itself. I click an address from a people hub contact, it opens Windows Phone Maps, not Nokia Maps. I click an address from a Bing search which has a dedicated hardware button, and it opens Windows Phone Maps, not Nokia Maps. I click an address in an email, you guessed it, it opens Windows Phone maps.
Searching in Nokia Maps has terrible results
I tried 3 searches today during the course of my day using Nokia Maps. Si Laa is a Thai restaurant I wanted to try a few miles from my house, with an obviously unique name. The results I get are for Joyce Cafe, something in Germany and Slovenia. Next I try UPS in hopes it will find the company-owned UPS Customer Center a quarter mile from my house. It doesn't find it at all, but does find a lot of UPS Stores that are further away. Nokia Maps choked on another restaurant in Beverly Hills I tried to find. My wife's iPhone 4s had no problems with any of these.
Local surface street traffic info is missing
On the Lumia 920, I don't get any surface street traffic info. I do see traffic info for highways and freeways. I used to get local surface street traffic on my old HD7S Windows Map. Windows Phone Maps shows some sporadic street traffic data but is not quite right either. I called Nokia about it, the level 1 techie tells me I'm supposed to have local street traffic but has no further explanation and has escalated to level 2...
Nokia maps loads much slower and is much choppier when panning the map than Windows Phone Maps. Nokia maps takes 4 seconds to load vs pretty much instant for Windows Phone maps.
Nokia removed Windows Phone Maps.
As shown in the first point above, Windows maps is still clearly on the phone, but Nokia removed the tile for it from the app list. There doesn't seem to be a way to directly open Windows maps. This would be fine except that Nokia maps, which is supposed to be the replacement, is worse than Windows Phone maps in so many ways.
There isn't a single reason to use Nokia Maps over Windows Phone Maps.
The only reason I use it is because Nokia has seen fit to make Windows Phone Maps hard to start by removing it from the apps list.
Nokia should leave both map apps available and in the apps list. If Nokia Maps is the superior product, people will use it. Right now I'd say Windows Phone maps is superior in every way and that's a shame for Nokia who is supposed to be a mapping leader...
|
OPCFW_CODE
|
🌹 这个应用程序可以在 ChromeFK 或 Google Chrome 网上应用商店进行下载。 ChromeFK上的所有插件扩展文件都是原始文件,并且 100% 安全,下载速度快。
23 March 2019: Along with the announcement of new emojis for Unicode 12, Douros updated Symbola.
20 June 2018: Earlier this month, Unicode 11 was released, and along with it, new emojis; afterward, Douros updated Symbola to support the new emojis.
12 February 2018: Shortly before the announcement of the new emojis for Unicode 11, Douros updated Symbola.
2 December 2017: Around the time when the Windows 10 Fall Creators Update was released, Douros updated Symbola to support even more emojis; I did not notice this until today.
13 July 2017: Douros has updated Symbola a little earlier than I expected, so I could update the extension before World Emoji Day; also, the Windows 10 Fall Creators Update will get a better emoji input method, so the EmojiOne Keyboard will no longer be useful, unless you prefer the way they draw the images.
17 July 2016: For World Emoji Day 2016, I have updated the copy of Symbola used in the extension so that it now supports Unicode 9.0; fortunately, George Douros did update the font after all, so that I wouldn't need to use the much bulkier SVGinOT fonts based on Twemoji or EmojiOne, which are less crisp at higher resolutions in their black-and-white versions, which Chrome falls back to because only Firefox supports color SVGinOT. Speaking of EmojiOne, if you prefer seeing graphical emojis or just want a good input method, use the EmojiOne Keyboard extension; consider combining its input method with this extension's use of Symbola, by turning off Auto-Replace in the EmojiOne Keyboard settings.
12 October 2015: Now the extension uses more lazy function definitions, and also ES6 Symbols or ES5 property definitions, to further isolate the effects of this extension from ordinary page scripts; also, the regexes used now skip over most Japanese text and CJK Unified characters, and I have provided a framework for skipping over more astral characters when figuring out which ones are probably emoji (using a regex or series of regexes to find emoji directly has proven to be too slow).
In honor of World Emoji Day, 17 July 2015, I have backported some performance improvements from the UserScript to this extension, but I think this will be the last version of Emoji Polyfill; however, even if Chrome finally supports emoji fallback for the Emoticons block, I will keep it around for the benefit of older Chromium-based browsers.
Chrome 42 for Windows supports native emoji fallback for most emoji ranges, notably excepting the emoticons; this extension will not be worked on much, but it will stay for users of browsers based on older versions of Chromium.
Chrome for Windows still does not perform emoji fallback as other browsers do, as Chrome recently started doing on the Mac, or as all browsers (including Chrome on Windows) do for various scripts; for example, web browsers on Windows fall back to Sylfaen for Georgian or Armenian text if the declared font does not have those characters, and modern browsers other than Chrome fall back to Segoe UI Emoji (Windows 8+) or Segoe UI Symbol (Windows 7+) if the declared font does not have emoji. This extension remedies that, by adding a few fonts to the end of every font-family property for every HTML element detected as probably having emoji; specifically, it adds "Segoe UI Emoji, Segoe UI Symbol, Symbola, EmojiSymbols !important" and bundles Symbola as a webfont for those who may not have it or the special Segoe UI fonts (for example, users on Vista, XP, or Linux). If you would prefer a UserScript, it is available on Greasy Fork under the same name as this extension (Emoji Polyfill), although it works a bit more slowly. As the short description says, this is a heavily modified fork of a more full-featured extension by Locomojis called Chromoji, which I have heard will be coming back to the Web Store soon. If you want graphical Apple or Android style emoji, or a convenient way to input emoji, wait for Chromoji to come back (it will be linked from here when it does); for now, if you want to input emoji, try the virtual keyboard in Windows 8+, or go to GetEmoji or Emojipedia.
Notable related extensions include the now-discontinued "Chromoji Hangouts Edition" (using Google Hangouts style emoji) and the still-developed "Twemojify" (using Twitter's own API to replace supported emoji with Twitter's graphical emoji on all sites, not just Twitter). This extension's icon and promotional tile use the colored glyphs from Segoe UI Emoji, while the extension itself only supports monochrome emoji; this is a limitation of the platform (Internet Explorer and Firefox support color in Segoe UI Emoji, and Firefox supports it even in older versions of Windows), as is the lack of support for skin-tone modifiers for emoji (currently, you will see the unmodified monochrome emoji followed by an "invalid character" symbol, which is the combining character for the skin-tone modifier). Additionally, flags are not well supported, displaying as two-letter country codes instead of national flags, even for the initial set of 10 flags that shipped with most Japanese emoji sets; also, keycap emoji are not fully supported, some being rendered as a digit in the normal font followed by the combining keycap character in the emoji fallback font, which doesn't combine properly because the font metrics are different. It seems as if Segoe UI Emoji has support for just the Unicode 6.0 emoji before Windows 10, while it supports all Unicode 7.0 emoji on Windows 10, and Symbola currently supports the Unicode 8.0 emoji but none of the Unicode 9.0 draft emoji; the screenshot is of Get Emoji, showing some of the Unicode 6.0 emoji symbols rendered with Segoe UI Emoji and some of the new Unicode 7.0 emoji symbols rendered with Symbola.
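The fallback mechanism described above can be sketched in a few lines; the regex and the element walk below are simplified stand-ins for the extension's real detection logic, and only the font list is taken from the description:

```javascript
// Sketch of the technique: append emoji-capable fallback fonts to an
// element's font-family list, for elements whose text probably has emoji.
// The font list comes from the extension's description; the regex is a
// simplified stand-in (misc-symbols block or an astral surrogate pair).
const FALLBACKS = "Segoe UI Emoji, Segoe UI Symbol, Symbola, EmojiSymbols";
const EMOJI_RE = /[\u2600-\u27BF\uD83C-\uD83E]/;

function withEmojiFallback(fontFamily) {
  // Appending (rather than replacing) keeps the page's declared fonts first.
  return fontFamily + ", " + FALLBACKS;
}

// In a browser content script, this would be applied to every matching element:
if (typeof document !== "undefined") {
  for (const el of document.querySelectorAll("body *")) {
    if (EMOJI_RE.test(el.textContent)) {
      el.style.setProperty(
        "font-family",
        withEmojiFallback(getComputedStyle(el).fontFamily),
        "important"
      );
    }
  }
}
```

Because the fallback fonts are appended at the end, they only ever render glyphs the page's own fonts are missing, which matches how native font fallback behaves in other browsers.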
|
OPCFW_CODE
|
I would qualify my Mario Kart: Double Dash playing these past few weeks as aggressive. I first picked up the game a while back when my sister was in town. We always played the old N64 game together and worked as a team to get gold medals in all the cups. We got the 50cc cups without too much effort, but the weekend came to an end before we made much progress on the 100cc races.
It took me a lot more time to get past that darn Star Cup than I thought it would. I had to get accustomed to the feel of the different cars and weapons to find something that worked for me. I would occasionally have a good run, but it took a lot of practice to achieve the consistency necessary to get the gold. After two weekends of work, I finally got past my roadblock.
I assumed that each new challenge would be incrementally as difficult, but that turned out not to be the case. Once I had achieved a certain level of skill, it turned out to be relatively straightforward to complete the "more difficult" races. Even the sixteen-track cup only took two times through to get the gold. Once you know how to work the kart, it only takes a few laps to adjust to a new track.
The phenomenon of having to put in a lot of work to accomplish basic goals and then comparatively little effort to achieve more complex goals reminds me a lot of computer programming. Once you learn the general concepts involved in solving problems with computer code, you start thinking like the computer. I've noticed that now that I've had experience with many different programming languages, I find picking up new ones to be fairly easy. To learn a new language, you just have to focus on the differences, and sometimes the differences can be small. Many languages have similar ways of accomplishing a given task.
The revelation is not hard to come to when you compare Java to C#. In fact, most true object-oriented languages have so many common traits that generic design patterns have been developed to cover all of them. The only difference in implementation is in the syntax. More often than not, when a programmer gets the sense that there "must be" some way to accomplish a particular task, he finds that he's right. Usually the answer can be found just by knowing which part of the documentation it will probably be in.
These ideas were reinforced by the MSDN presentation on designing progressive APIs. It's nice to know that some people put together classes with the idea that someone should be able to make an educated guess as to what a class might be named or how it might work, and be reasonably confident they are making the right choice. Thanks to the wonders of IntelliSense, it's often not necessary to read documentation any more.
Ok, wow. I guess I should probably stay away from Gamecubes and computers for the next few days before I just go completely nuts. I'm going to bed. Posted by Matthew at March 14, 2005 11:08 PM
|
OPCFW_CODE
|
I never thought that I was gonna write code for an ASP.NET Webforms project again. I've been creating all my projects in ASP.NET MVC since it hit beta. But I guess Webforms is really good at some things, like coming up with quick and dirty web apps :)
Alright, so I've got a web application which was created in ASP.NET MVC. The administrators of this web app want to control when the site is up. It's basically a web app for processing some stuff which happens only during the daytime, so they don't want the users (their agents) to go around and mess with the site during the night, or as soon as they get out of the office. We already had a nice web app with an administrator console through which the admins can do a lot of stuff like adding/deactivating users. The administrators wanted a feature through which they could bring the site down or up from their admin page. A simple search turned up a lot of interesting articles. This one by Rick Strahl documents how you can take down an app and bring it back up without using "app_offline.htm" files. I didn't want to take that route, because I think the app_offline way is the best: it shuts down the application and unloads the application domain. You can find some more interesting info about it in this post by Scott Gu. Now, the downside of this kind of approach is that once the application is down, there is no way to bring it back up using the same site.
Now, let's see how we can use this nifty little feature, available for ASP.NET 2.0+ apps, to come up with an admin page through which we can take the site down or bring it back up.
1) The first thing you need to do is create a basic Webforms website or web application in Visual Studio. On the default page you want a few textboxes, radio buttons and a submit button, which looks like this:
2) This is the fragment of code which creates this GUI. Nothing too fancy.
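The markup for that page might look something like the following; the control IDs here are my own guesses, not the original listing:

```aspx
<asp:Label runat="server" Text="Username" />
<asp:TextBox ID="txtUsername" runat="server" />
<asp:Label runat="server" Text="Password" />
<asp:TextBox ID="txtPassword" runat="server" TextMode="Password" />
<asp:RadioButtonList ID="rblSiteStatus" runat="server">
    <asp:ListItem Text="Bring site up" Value="Up" />
    <asp:ListItem Text="Take site down" Value="Down" />
</asp:RadioButtonList>
<asp:Button ID="btnSubmit" runat="server" Text="Submit" OnClick="btnSubmit_Click" />
```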
3) Once we have that, it's basically an exercise in writing some code which does the user/password matching and creates an app_offline.htm file in the right location.
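The file-creation part can be sketched like this; the class and method names are mine, and the padding past 512 bytes works around browsers' "friendly" error pages for short responses, a known quirk of app_offline.htm:

```csharp
using System;
using System.IO;

// Hypothetical sketch: dropping an app_offline.htm into the target site's
// root takes the ASP.NET app down; deleting it brings the app back up.
static class AppOffline
{
    public static void TakeOffline(string siteRoot, string message)
    {
        // ASP.NET unloads the app domain as soon as this file appears.
        // Pad past 512 bytes so IE doesn't swap in its "friendly" error page.
        string html =
            "<html><body><h1>" + message + "</h1></body></html>" +
            "<!--" + new string(' ', 512) + "-->";
        File.WriteAllText(Path.Combine(siteRoot, "app_offline.htm"), html);
    }

    public static void BringOnline(string siteRoot)
    {
        string file = Path.Combine(siteRoot, "app_offline.htm");
        if (File.Exists(file))
            File.Delete(file); // removing the file restarts the application
    }
}
```

Because this has to run while the target site is down, it lives in a separate admin site, which is exactly why the caveat below about running it on a different website matters.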
The code is pretty much self-explanatory; hope it helps someone. You just have to be careful about a couple of things:
- You still have to run this on a different website
- You need to give the right privileges on the directory APP_OFFLINE_PATH to the user which runs the IIS worker process. For IIS7 this is the NETWORK SERVICE user by default.
And before I forget, you can get the source code for this project from my git repository hosted on GitHub at http://github.com/minhajuddin/blog_demos/tree/master
|
OPCFW_CODE
|
Most websites have their server-side programs written in PHP. It is an easy-to-learn and easy-to-use language, but it is more vulnerable to web attacks, so one should be careful while writing PHP code. Secondly, it also lacks multithreading at the core level. Despite these vulnerabilities and missing features, it is rated as one of the most popular programming languages in the world. Most web content management systems are written using PHP.
A programmer has to write a lot to accomplish the same result as in C++. There are numerous inbuilt functions which make the life of a programmer easy. Secondly, Java offers a great deal of functionality, like generics and Swing, that is not provided by C++. Java remains elusive for writing operating systems, though. There is a large trade-off between speed and complexity while writing Java code. C is a better-suited programming language for writing an operating system compared to Java; the primary reason is the efficiency and speed advantage offered by C. A few more popular programming disciplines where you can take support:
Welcome to the world of programming. If you are struggling with homework in Java, C, C++ or any other programming language, our experts are available to help you at any time. We help with programming assignments and projects requiring intensive use of object-oriented concepts. Why do students face problems with programming assignments? There is a basic problem for students pursuing a masters in computer science or any other bachelor's course in the field of computing: they see every programming course as a theoretical one. If you are just reading the theoretical concepts without any concrete implementation, it is difficult to get a hold of programming. The struggle begins with the lack of programming practice and ends in a poor grade. The point that we are trying to make here is the importance of practice when we talk about programming subjects; with practice, you can easily grasp the concepts of programming.
All Assignment Help experts are highly qualified and well versed in the use of programming languages, and we always look forward to helping you with the tricky topics given below:
Anyone who is not that comfortable with coding but who is interested in machine learning and wants to apply it easily to datasets.
Do C++ programming assignments with ease: First, understand the difference between assignments based on C and C++ programming. An important difference to remember and understand is the way these two programming languages treat the real world. C++ programming assignments are based on the concepts of objects, which hover around the concepts of data encapsulation, polymorphism, data hiding, inheritance and much more. What makes it different from a procedural or structural language is the use of classes, methods and abstraction.
In a world where dynamic websites and efficient software packages are in demand, relying upon C++ or Java alone seems to be foolishness. C++ and Java are very efficient and capable programming languages, but they stand nowhere in front of modern programming languages like Python.
Help with PHP programming: This is the server-side scripting language created and designed for web development.
Tip: Even if you download a ready-made binary for your platform, it makes sense to also download the source.
I was panicking about my marketing plan homework that was due on a very short deadline. I was given a sample by my professor, but I could not do anything constructive. Finally, I found allassignmenthelp.com for my assignment help. After checking a few reviews on Australian websites, I put my trust in allassignmenthelp.
A few of my friends from Holmes Institute, Australia recommended allassignmenthelp.com for assignment help. To my surprise, the quality of the work done was beyond my expectation. The tutor worked in accordance with the demands of the assignment. I have recommended you guys to many of my classmates since then.
|
OPCFW_CODE
|
derivation of "riddled" as in "riddled with bullets"?
How did "riddled" come to be used as in the phrase "riddled with bullets"? Most definitions only include the puzzle type meaning.
"Riddled" in this meaning probably derives from the tool called a riddle, which is certainly riddled with holes:
According to Wiktionary, its etymology is:
From Middle English riddil, ridelle (“sieve”), from Old English hriddel (“sieve”), alteration of earlier hridder, hrīder, from Proto-Germanic *hridą (“sieve”), from Proto-Germanic *hrid- (“to shake”), from Proto-Indo-European *krey-.
This is indeed the origin ascribed by the OED: To fill with holes, like those in a riddle; to make holes throughout, esp. by means of bullets or other ammunition; (also) to make (holes) in something, examples mostly from the early 19th century onward.
How about "riddle with cancer"? Is this the same?
As EL&U member 'rand al'thor' says, a 'riddle' is an old name for a sieve, and by extrapolation 'riddled' gained the meaning 'to put holes in something' (like a sieve). The Oxford English Dictionary suggests different origins for the word 'riddle' in the sense of a puzzle, and in the sense of sieved or punctured. Firstly, the puzzle sense:
riddle
I.riddle, n.1
(ˈrɪd(ə)l)
Forms: α. 1 rǽd-, rédels, 4 redilis, 4–5 redel(e)s, 9 dial. ridless. β. 4, 6 redele, 4–5 redel, redil, 6 readle, redle, reedel, reedle. γ. 4–6 rydel, 6 ryddel(l, ryd(d)le, 4 ridil, 5 ridel, 6 riddel, ridelle, ridle, 6– riddle.
[OE. rǽdels masc. and rǽdelse fem., counsel, opinion, conjecture, etc., also a riddle, = Fris. riedsel, MDu. raetsel (Du. raadsel), OS. râdisli neut., râdislo masc. (MLG. râd-, rêdelse, rêdesal, LG. radsel), OHG. râdisle (MHG. ratsel, retsel, etc., G. rätsel), f. rǽdan to read or rede: see -els.]
a. A question or statement intentionally worded in a dark or puzzling manner, and propounded in order that it may be guessed or answered, esp. as a form of pastime; an enigma; a dark saying.
Now, the OED on the sense of a sieve or something 'full of holes':
II.riddle, n.2
(ˈrɪd(ə)l)
Forms: 1 hriddel, 4 riddil, 4, 6 riddill, 7 riddell, 6– riddle, 7, 9 dial. ruddle; 4 ridelle, 5 ridil, 6 redell, 7 ridle; 4 rydil, 5 ryddyll, rydyl, rydelle, 6 ryd(d)le.
[Late OE. hriddel: the earlier form is hridder ridder n.1]
a. A coarse-meshed sieve, used for separating chaff from corn, sand from gravel, ashes from cinders, etc.; the most usual form has a circular wooden rim with a bottom formed of strong wires crossing each other at right-angles.
The OED mentions that an early version of the word for 'riddle' (in the sense of sieve) is 'ridder', and has this to say about that word:
ridder
▪ I.ˈridder, n.1dial.
Forms: 1 hrider, hridder 5 rydder, erron. rydoun, 7–9 dial. ridder, rudder, ruther.
[OE. hrider, later hridder, from a stem hrid- to shake (cf. hriðian to shake with fever), an ablaut-variant of which is represented by OHG. rîtera, rîtra (MHG. rîtere, rîter, G. reiter), and more remotely by L. crībrum, Ir. criathar. In later Eng. the more usual form is riddle n.2]
A sieve or riddle.
So two words in Old English, 'hrider' (to shake) and 'rǽdels' (counsel, opinion or conjecture), lead to two words that are written and pronounced the same way but have different meanings: 'riddle' (as in sieve) and 'riddle' (as in puzzle).
What is interesting to note is that 'riddle' (as in puzzle) comes from an OE word that also led to the English word 'read', but as the OED suggests, the original meaning of that Old English word was actually 'counsel' or 'conjecture' (in the sense of 'puzzle out'), as explained here:
III.read, v.
(riːd)
Pa. tense and pa. pple. read (red). Forms: inf. 1 rǽdan, (-on, ræddan, north. reda, reða), 3 ræden(n), raden, 2–4 reden, 5 redyn; (and pres.) 2, 4 rade, 3–6 rede, 5–6 reede, Sc. red, reid, 6 (8 Sc.) reed; (3) 6–7 reade, 6– read. (Also 3 sing. pres. 1 ræt, 2–4 ret, 3 red, 3–4 rat.) pa. tense 1 pl. reordun; 1 rǽdde, 3–4, 6 radde, (4 rade), 4, 6 rad, (4 rat); 1 pl. red(d)on, 3, 6 (9) redd, 4 redde, 4–6 rede, 4–6 (7–8) red, 7– read. pa. pple. 1 rǽden, 4 reddynn, 6 readen; 1 rǽded, 3–4 redd, 3–6 redde, (4 radde), 3–6 (7–8) red, 4 rede, 6 reed(e, 6– read; 1 ᵹeredd, 3 ired, 3–4 irad, 4 iredde, yrade, 4–5 iradde.
[Comm. Teut.: OE. rǽdan = OFris. rêda, OS. râdan (MLG. raden, MDu. and Du. raden), OHG. râtan (MHG. râten, G. raten, rathen), ON. ráða (Sw. råda, Da. raade), Goth. -rêdan:—OTeut. *ræ̂đan, prob. related to OIr. im-rádim to deliberate, consider, OSl. raditi to take thought, attend to, Skr. rādh- to succeed, accomplish, etc.
The Comm. Teut. verb belonged to the reduplicating ablaut-class, with pa. tense *rerōđ and pa. pple. *garæ̂đono-z, whence Goth. -rairôþ, *-rêdans, ON. réð, ráðinn, OHG. riat, girâtan (G. riet, geraten), OS. ried or rêd, *girâdan (Du. ried, geraden). The corresponding forms in OE. are reord and (ᵹe)rǽden, but these are found only in a few instances in Anglian texts, the usual conjugation being rǽdde, ᵹerǽd(e)d, on the analogy of weak verbs such as lǽdan: cf. MLG. radde, redde, Sw. rådde, and G. rathete (for usual riet), Da. raadede. The typical ME. forms are redde or radde in the pa. tense, and (i)red or (i)rad in the pa. pple.; in the later language (from the 17th c.) all tenses of the verb have the same spelling, read, though in pronunication the vowel of the preterite forms differs from that of the present and infinitive. Individual writers have from time to time denoted this by writing red or redd for the pa. tense and pa. pple., but the practice has never been widely adopted.
The original senses of the Teut. verb are those of taking or giving counsel, taking care or charge of a thing, having or exercising control over something, etc. These are also prominent in OE., and the sense of ‘advise’ still survives as an archaism, usually distinguished from the prevailing sense of the word by the retention of the older spelling rede. The sense of considering or explaining something obscure or mysterious is also common to the various languages, but the application of this to the interpretation of ordinary writing, and to the expression of this in speech, is confined to English and ON. (in the latter perhaps under Eng. influence).]
Having said all of that, the credit for the correct answer should go to EL&U member 'rand al'thor'.
|
STACK_EXCHANGE
|
Your learning journey in a nutshell
Learn how to wireframe your website and develop it using HTML.
Ideation and Wireframing
Turn your website idea, be it e-commerce or a blog, into an actionable wireframe.
Developing the Skeleton
Learn how to use HTML and the basics of CSS to stylize the website.
Understand how to design your website with the help of Bootstrap.
Deploying a Database
Handle user data, and design and deploy your database using MySQL.
Making your Website Functional
Learn the fundamentals of PHP and make your website functional using it.
Deploy your Website
Understand how to use GoDaddy and GitHub to deploy your very first website.
Forbes declared web development one of the 10 high-paying tech degrees, and India now has more than 624 million internet users
The Bureau of Labor Statistics predicts 15% growth in the employment rate for web developers
Web development technology has made it easier to maintain transparency and avail better services
The average salary of an early-career web developer ranges from 3.5L to 6.5L p.a.
Learn from the best
Through this online course, explore the field of web development
2 Live Classes
Completion of our courses will provide you with a Certificate of Completion.
We have helped over 1,20,000 mentees step into their desired fields of passion. Here is what a few of them have to say:
With the help of this course, I was able to understand everything about web development, right from the basics! Through the LIVE sessions with the mentor, we were able to understand certain concepts in a better way. There were also doubt-clearing sessions held that helped us all understand the topics taught and get answers to any questions that we had. Also the course content provided on the app was very fun to go through and very engaging as it involved videos, and also mini-quizzes at the end of every lesson to assess what we had learned. And now I am able to create my own website, which is only because of MyCaptain. It was a very nice experience overall.
The Captain I had for this course, Captain Aquib, really helped me grow a lot throughout the duration of this course! It was a great experience for someone like me who is a complete beginner! I had really thought it would be difficult to catch up with the others in the batch as they seemed to be of a more advanced level, or at least had some background knowledge in this field, but my Captain made it possible! Thanks to him, I can now say I know how to create an entire website- including both the front end and the back end!
I totally enjoyed and learned a lot through this Web Development course in a fun and nice way. All the course content was very relevant and well presented. The Captain for our batch was very inspirational and motivational but fun at the same time. He made all the topics sound very interesting and helped us with our doubts as well. He was also prompt in sending the Letter of Recommendation when asked. It was good, learning with good Captains who have great knowledge about the subject.
Loved by 2,50,000+ Learners and 1000s of organisations.
We might have already answered them here:
|
OPCFW_CODE
|
Having a familiarity with the .NET framework will help you with VB.NET and C#, but I'd hate to think of a VB programmer trying to do manual memory allocation and clean up for structures etc. in plain ol' C.
I have derailed the thread! hahaha! Take that Apple!
I mean - sorry guys!
I wrote my first multi-threaded application a few years back in c#, I'm surprised by how much I enjoy the language - although professionally I've not "officially" worked in it - I made a tool to do data dives from Oracle of archived data that was on IMS for any future legal/other needs to extract specific groups of data (of which there are gigabytes upon gigabytes). I wasn't too happy to basically do all that work and hand it over to an outsourced group from India - but that's life.
Yes, lets get back to iPad help:
1. Return to apple store. 2. Demand Refund. 3. If no refund goto 2.
The Night Sky app now has a Santa Tracker option for Christmas Day! Follow him through the sky!
That is progress for you - when I was a nipper we had to rely on the National Air Traffic Service and NORAD for updates on Santa's progress. Now we have the technology that we can all check for ourselves.
I am told that one of our number now has an Ipad...
And four and a half hours of work, cursing and tinkering still can't actually use it. The up-to-date version of iTunes has failed to install on two different computers in this house. (I know why it won't install on the older Mac, but the other one I cannot fathom out why) and as a result I am unable to use my purchase. I am therefore going to prevail upon a friend of mine down the road to borrow his wireless internet connection this evening, in the hope that I can use that to get it to complete the set-up process.
Failing that, my options appear to be: try to find somewhere with free Wi-Fi I can use to carry out the process, buy and kludge a wireless router into my existing wired router/modem, or accept that I have spent something in the region of four hundred pounds on a paperweight. In the meantime it sits there mocking me with its 3G connection that I can't use because Apple won't even allow me to turn it on properly.
Hmm. Just had a sit down with a comic and a drink (non-alcoholic), cooled off a bit, and thought of something else that I might be able to try. Back to the laptop before I go out again I think. Might have another tweak I can throw at the problem...
I have far from given up hope. I am determined to get my shiny black/silver device up and functioning.
I was under the impression that they could now be set up straight out of the box, without another machine, as they can wirelessly sync with a computer if need be.
I suspect that the New iPad probably can. This is the legacy iPad 2 model, which fitted better into my budget.
I have made progress of a sort, I dug into the laptop and temporarily killed one of the security suites - which turned out to be what was stopping iTunes from initiating. So I have now got the current release of iTunes up and running on the laptop. The iPad however just sits there on the end of the cable connection stating "connecting to iTunes" and does... nothing of any note.
I am away to my friend's for the evening in a few minutes and I will have a go and see if I can get it kicked into life over their Wi-Fi. If not, I still have a few thoughts in mind as to what to do next. (Other than scream, shout and grumble. I've done all that already and got it out of my system.)
|
OPCFW_CODE
|
Visualizations of your social data can be extremely helpful in developing marketing strategies, but how can you go about creating these graphs?
Google Knowledge Graph, the Facebook Graph, and the Social Graph — today’s digital marketers and brands love to throw these terms around like they are ancient, tested, and proven concepts that everybody grasps and understands. However, the truth is often very different.
Recently, one of our clients asked me to show him his brand’s "Social Graph." Not being a social expert, I had to do a lot of research and internal inquiries to find some ways to actually "show" him. Turns out it is a little more complex than people often expect. It’s not a simple downloaded Excel sheet you can get that shows your "graph." Instead, Social Graphs are basically a network of nodes and edges — of entities and the connections between them.
Nodes and edges? Confused? I was, too, which is why I want to share a simple way to analyze your social graphs or networks in order to better understand and visualize them. We will focus today around analyzing Social Network Graphs, but this approach can be used for any type of network data like link networks and website structures.
Before we dive head-first into one of those "fascinating" screenshot-powered, step-by-step guides, I want to quickly address the data concepts behind graph visualizations. Most graphs are powered by a two-dimensional data system consisting of two core items: nodes and edges.
Nodes are the entities we are evaluating (People, Pages, Handles, Groups, etc.) and edges are the connections between them (Likes, Following, Friendships, etc.). Most of the network data today is handled via GraphML files or .gdf (graph data format) files. Basically, these are simple text files that contain a list of all the nodes and the relationships/edges between them.
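To make the nodes-and-edges idea concrete, here is a small stdlib-only Python sketch that writes a minimal GDF file of the kind these tools exchange; the page names and the two-column schema are illustrative assumptions, not NetVizz's actual output:

```python
# Sketch of the GDF (graph data format) structure: a plain text file with a
# node section followed by an edge section. Page names below are made up.
def write_gdf(path, nodes, edges):
    """Write a minimal GDF file: node definitions, then edge definitions."""
    with open(path, "w") as f:
        f.write("nodedef>name VARCHAR,label VARCHAR\n")
        for name, label in nodes:
            f.write(f"{name},{label}\n")
        f.write("edgedef>node1 VARCHAR,node2 VARCHAR\n")
        for a, b in edges:
            f.write(f"{a},{b}\n")

nodes = [("clickz", "ClickZ Live"), ("adage", "AdAge"), ("mashable", "Mashable")]
edges = [("clickz", "adage"), ("clickz", "mashable")]  # "page A likes page B"
write_gdf("likes.gdf", nodes, edges)
```

Real exports carry extra columns (like counts, categories), but the two-section layout is the whole trick: everything after `nodedef>` is an entity, everything after `edgedef>` is a connection.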
Several tools can visualize network data and the most exhaustive list I know of can be found here. In today’s examples we will use Gephi to visualize our social data. Why Gephi? It’s free, open-source, cross-platform, and easy to use. It also has one of the most appealing visual outputs compared to some of the other tools out there.
There are thousands of ways to extract social network data like the native APIs, Custom Applications, Excel tools, and more. One of the easiest for data extraction AND visualization is NodeXL. It allows for fairly effortless extraction from multiple networks (YouTube, Twitter, etc.) straight out of Excel. It also allows you to visualize and customize them directly inside Excel.
For Facebook data (and for today’s example) I’ll use an actual Facebook app called NetVizz. NetVizz allows you to export your own "Friend" and "Like" networks as well as "Page Like" networks. Examining Page Like Networks is a great way to analyze audience affinities and learn more about target audiences by understanding common interest and connections.
OK, first we have to determine what we want to analyze. Let’s say we are thinking about having a booth, sponsoring, or speaking at the next ClickZ Live conference and we are trying to determine some of the common interests among attendees. What else do they care about, what do they read, or who are they connected with? Having these insights will help us to better understand the audience and determine how and where to communicate with them.
Getting the Data
The first thing we would need to do is to get the Like Network for ClickZ Live. In order to do that, we need to find out the numeric Facebook ID for the conference. The easiest place to get it would be http://lookup-id.com; it’s a free (ad-supported) site that allows you to enter a Facebook URL and in return get the ID. Once you have the ID extracted (in our case, 82891330657) we would go to the NetVizz Facebook page and select "Page Like Network."
Now simply enter the numeric Facebook page ID of your choosing and select a depth of 2. This will take some extra time, but it gives you a broader set that goes to second-level likes.
After a few minutes of crawling you will see a link that allows you to download a GDF network file.
Note: If you are downloading data for pages with millions of likes, this can take a few hours. But since this is a server side crawl, you are able to have multiple crawls running simultaneously.
Importing the Data
Once you have downloaded your .gdf file, start up Gephi and import it via File->Open. On the Import report, just leave the default options and click "okay." You will be presented with a somewhat odd-looking bunch of lines and dots.
Enhancing the Data
One advantage of Gephi is the easy-to-use implementation of mathematical operations. For our data, we want to do two things.
- Click on "average path length" on the right-hand side. This will calculate the distance and betweenness centrality of our nodes (their centrality within our chosen network). It will allow us to understand their importance relative to the other nodes. Once the calculation completes, just click "close."
- On the right-hand side, run Modularity. Modularity uses a community detection algorithm that allows us to group related nodes together (we will color code them). Click "close" once the calculation is completed.
After you run both of these, nothing will change visually, but we can now perform operations against these calculations.
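For intuition about what betweenness centrality measures, here is a brute-force toy in plain Python (not Gephi's implementation, which uses a much faster algorithm, and covering only betweenness — modularity/community detection is a separate algorithm): for each pair of nodes, it credits every other node with the fraction of shortest paths that pass through it.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, start):
    """Hop distances from `start` via breadth-first search."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def shortest_paths(adj, s, t):
    """Enumerate all shortest s-t paths by walking 'downhill' toward t."""
    dist_t = bfs_dist(adj, t)
    paths = []
    def walk(u, path):
        if u == t:
            paths.append(path)
            return
        for v in adj[u]:
            if dist_t.get(v) == dist_t[u] - 1:
                walk(v, path + [v])
    if s in dist_t:
        walk(s, [s])
    return paths

def betweenness(adj):
    """For each node: fraction of pairwise shortest paths passing through it."""
    score = {v: 0.0 for v in adj}
    for s, t in combinations(adj, 2):
        paths = shortest_paths(adj, s, t)
        for v in adj:
            if v not in (s, t) and paths:
                score[v] += sum(v in p for p in paths) / len(paths)
    return score

# A star graph: the hub sits on every leaf-to-leaf shortest path,
# so it gets all the betweenness -- exactly why hubs grow big in Gephi.
adj = {'hub': ['b', 'c', 'd'], 'b': ['hub'], 'c': ['hub'], 'd': ['hub']}
scores = betweenness(adj)
print(scores)  # hub: 3.0, leaves: 0.0
```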
Visualizing the Results
This is the fun part. Now that we have run our calculations, let’s start by sizing the nodes. On the top left side select the Nodes tab, then select the diamond icon (size) and choose "betweenness centrality." The minimum and maximum sizing depends on the size of your set; for this small example I would recommend minimum 10 and maximum 50. Choose "apply" and you should see that the nodes have adjusted their sizes.
Next, choose the Partition tab in the top left corner. Then select "nodes" and hit the green arrows in order to refresh the options. You should see the Modularity class option. This is the data we got from our community detection algorithm. Once you select this and hit "apply," the nodes will be colored based on the results of our community detection algorithm, according to their common attributes and relation to each other.
Now let’s give our results that awesome look. Underneath the Partitions and Ranking window on the left is a Layout option. This allows you to use different algorithms to lay out the nodes and edges. The best one for this type of data is Force Atlas. Simply select it, check "Prevent Overlap" and press "apply." You should be left with a view similar to mine below, which clearly displays the major and minor nodes as well as the connections between them:
But what are they? Use the three little icons highlighted above in yellow to reveal your metrics: The first one will show the labels; in the second one use the dropdown and choose node size; then use the slider (third one) to find a fitting size.
At this point it should look like this:
There are a ton of adjustments you can make to sizes, colors, etc. to graph your data and see what’s really happening in a brand’s social network, but this is not bad for five minutes of work. Now you can start to zoom in, move, and highlight nodes. As an example, when I hover over the ClickZ Live node, I can clearly see the biggest affinities:
Playing around with the data a bit reveals some interesting connections. During this exercise, for instance, I discovered some patterns from pages that indicate they either paid for their likes or made all their employees like their clients’ pages (but I won’t call them out publicly; can you find them?).
Another insight from my ClickZ Live example is that comScore is the biggest common denominator outside of ClickZ’s own properties.
There are countless deeper analysis models you can apply in Gephi, building on the existing data, such as PageRank and clustering.
The visualization below is what I eventually sent to my client to show him his brand’s social network. I generated it using Force Atlas, Page Rank, and Modularity and then added some transparency in the Preview Dialog.
I hope this inspires you to perform this type of visualization and get some great insights into your brand’s graph data. Questions? Feel free to message me at @nxfxcom.
Google has published a number of tools useful for the hiring process, including advice on creating a job description, preparing the interviewer, best practices for interviews, and others.
Amazon has updated their AWS Well-Architected Framework (PDF) based on feedback from clients, adding a new pillar, Operational Excellence.
Mozilla has launched their website security analysis tool. Dubbed Observatory, the tool helps to spread information on best security practices to developers and sys admins in need of guidance.
GitHub’s Phil Haack hosted a panel on Channel 9 that focused on best practices for open source projects.
This post explains the best practices for becoming a great and successful remote developer.
Network performance, virtualization and testing are some of the considerations to address performance and scalability issues with NoSQL databases. Alex Bordei wrote about scaling NoSQL databases and tips for increasing performance when using these data stores.
Hackathons are events where developers work together during a fixed period to collaboratively develop software. They provide learning opportunities and space for developers and organizations sponsoring the hackathons to network and have some fun.
Organizations are looking for ways to do continuous change to increase their agility. There’s an interest in practices that managers can use to make change happen in their organizations. InfoQ interviewed Jason Little about his book on lean change management, what inspires him, and on using options and innovative practices in change.
Apiary, the company behind API Blueprints has announced a new offering, Apiary for Enterprise, that promotes API design best practices through tooling that validates API designs against defined API style guide standards and best practices. InfoQ caught up with Apiary to shed more light on this new offering.
Google has published a number of guidelines and boilerplate code for cross-platform responsive website design.
Mobile Backend as a Service provider AnyPresence continues to hone their chops, launching the fifth update to their self-titled platform geared for the enterprise. Co-founder Rich Mendis provides some insights for InfoQ readers…
Thoughtworks recently released a new installment of their technology radar highlighting techniques enabling infrastructure as code, perimeterless enterprises, applying proven practices to areas without, and lightweight analytics.
Organizations mostly do an agile transformation for a whole team, project, or organizational unit, given that agile is a team-driven approach. But there are also professionals who start using agile practices individually, or who work agile as a one-person team. How can individuals adopt agile, and what kind of benefits can it give them?
The purpose of backlog grooming is to keep the product backlog up to date and clean. Different approaches are used by product owners and teams to do this.
Early results of a study on the effects of agile development practices are showing improvements in productivity and quality. These results aim to answer questions on development projects schedules and budgets. They also provide insight in the results of outsourcing and co-located teams.
Public key decryption?
In the article How Does the Blockchain Work? the writer makes the following statements:
Since only you should be able to spend your bitcoins, each wallet is protected by a special cryptographic method that uses a unique pair of distinct but connected keys: a private and a public key.
If a message is encrypted with a specific public key, only the owner of the paired private key can decrypt and read the message. The reverse is also true: If you encrypt a message with your private key, only the paired public key can decrypt it. When David wants to send bitcoins, he needs to broadcast a message encrypted with the private key of his wallet. As David is the only one who knows the private key necessary to unlock his wallet, he is the only one who can spend his bitcoins. Each node in the network can cross-check that the transaction request is coming from David by decrypting the message with the public key of his wallet.
Specifically this: If you encrypt a message with your private key, only the paired public key can decrypt it.
Is it true that you can encrypt a string with a private key and only the public key can decrypt it? I was aware of the reverse, obviously, but this just doesn't seem right.
Indeed the text quoted is wrong; at the very least, by using incorrect vocabulary. That should be: if you sign a message with your private key, the paired public key can be used to verify the signed message's integrity and origin.
What small amount of truth there is in the original statement boils down to: in some asymmetric cryptosystems, including RSA¹ (but not including ECDSA used in bitcoin and many other protocols), the sign operation includes a step similar to a step used in encryption, except with the private key instead of the public key; and that's undone in the verify operation, which includes a step similar to a step used in decryption, except with the public key instead of the private key.
¹ And then not the variant of RSA most used in practice for performance reasons, which uses the Chinese Remainder Theorem in private-key operations. That has no equivalent for public-key operations, and uses the private key in a form that makes it not interchangeable with the public key. That makes the twist in the text quoted unworkable.
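That structural overlap can be illustrated with textbook RSA and deliberately tiny numbers. These parameters are for illustration only and are wildly insecure; real schemes add hashing and padding (RSA-PSS), and ECDSA has no such private-key "encryption-like" step at all.

```python
# Textbook RSA with toy parameters -- illustration only, wildly insecure.
p, q = 61, 53
n = p * q            # public modulus: 3233
e = 17               # public exponent
d = 2753             # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

m = 65               # message representative (in practice: a padded hash)

# "Signing" here is modular exponentiation with the private exponent...
signature = pow(m, d, n)
# ...and "verifying" undoes it with the public exponent.
assert pow(signature, e, n) == m

# Encryption uses the same operation with the exponents swapped -- the
# kernel of truth in the quoted text. Real signature and encryption
# schemes differ far more than this toy suggests.
ciphertext = pow(m, e, n)
assert pow(ciphertext, d, n) == m
```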
Nitpick: RSA sign shares a substep with decryption, verify with encryption.
@SAI Peregrinus: Your statement is simple and correct. But it does not parallel the text quoted in the question, which compares signing to encryption, and verify to decryption. Hence my carefully pondered wording which is the less hairy I managed to get that is correct and pairs things as in the text quoted in the question.
CRT is mostly a red herring. Even if you represent the private key as the public exponent, RSA can work functionally with inverted keys, but it's trivially insecure, since given a private key $(n,d)$ you can deduce the public key $(n,e)$ by guessing that $e$ is 65537 or 3.
@Gilles'SO-stopbeingevil': I'm pretty sure we can generate an RSA keypair with two large private keys.
@Gilles'SO-stopbeingevil' : my point for a note introducing the CRT in the context of the question is: CRT is typically used for RSA decryption, yet it can't safely be used when doing an RSA signature verification step (even if the public exponent is huge, which can be as pointed by Joshua). Hence making a parallel between signature verification and decryption (as done by the text quoted in the question) fails not only for ECDSA, but also for RSA as practiced.
@Joshua In theory, yes. With most RSA implementations, no: $e$ is often limited to a small range.
I'd say the concept of "encryption using a private key" has a contradiction in terms. Encryption requires making some plaintext into secret ciphertext, but since it can be decrypted using a public value (the public key) it's by definition not secret ciphertext! The question as asked is thus meaningless. fgrieu's answer is good at disambiguating it, just wanted to note the substep difference in practice.
<?php
use Corcel\Post;
use Corcel\Page;
class PostTest extends PHPUnit_Framework_TestCase
{
public function testPostConstructor()
{
$post = new Post();
$this->assertTrue($post instanceof \Corcel\Post);
}
public function testPostId()
{
$post = Post::find(1);
if ($post) {
$this->assertEquals($post->ID, 1);
} else {
$this->assertEquals($post, null);
}
}
public function testPostType()
{
$post = Post::type('page')->first();
$this->assertEquals($post->post_type, 'page');
$page = Page::first();
$this->assertEquals($page->post_type, 'page');
}
/**
* Tests the post accessors
* Accessors should be equal to the original value.
*/
public function testPostAccessors()
{
$post = Post::find(2);
$this->assertEquals($post->post_title, $post->title);
$this->assertEquals($post->post_name, $post->slug);
$this->assertEquals($post->post_content, $post->content);
$this->assertEquals($post->post_type, $post->type);
$this->assertEquals($post->post_mime_type, $post->mime_type);
$this->assertEquals($post->guid, $post->url);
$this->assertEquals($post->post_author, $post->author_id);
$this->assertEquals($post->post_parent, $post->parent_id);
$this->assertEquals($post->post_date, $post->created_at);
$this->assertEquals($post->post_modified, $post->updated_at);
$this->assertEquals($post->post_excerpt, $post->excerpt);
$this->assertEquals($post->post_status, $post->status);
}
public function testPostCustomFields()
{
$post = Post::find(2);
$this->assertNotEmpty($post->meta);
$this->assertNotEmpty($post->fields);
$this->assertTrue($post->meta instanceof \Corcel\PostMetaCollection);
}
public function testPostOrderBy()
{
$posts = Post::orderBy('post_date', 'asc')->take(5)->get();
$lastDate = null;
foreach ($posts as $post) {
if (!is_null($lastDate)) {
$this->assertGreaterThanOrEqual(0, strcmp($post->post_date, $lastDate));
}
$lastDate = $post->post_date;
}
$posts = Post::orderBy('post_date', 'desc')->take(5)->get();
$lastDate = null;
foreach ($posts as $post) {
if (!is_null($lastDate)) {
$this->assertLessThanOrEqual(0, strcmp($post->post_date, $lastDate));
}
$lastDate = $post->post_date;
}
}
public function testTaxonomies()
{
$post = Post::find(1);
$taxonomy = $post->taxonomies()->first();
$this->assertEquals($taxonomy->taxonomy, 'category');
$post = Post::taxonomy('category', 'php')->first();
$this->assertEquals($post->ID, 1);
$post = Post::taxonomy('category', 'php')->first();
$this->assertEquals($post->post_type, 'post');
$this->assertEquals(true, $post->hasTerm('category', 'php'));
$this->assertEquals(false, $post->hasTerm('category', 'not-term'));
$this->assertEquals(false, $post->hasTerm('no-category', 'php'));
$this->assertEquals(false, $post->hasTerm('no-category', 'no-term'));
$this->assertEquals('php', $post->main_category);
$this->assertEquals(['php'], $post->keywords);
$this->assertEquals('php', $post->keywords_str);
}
public function testUpdateCustomFields()
{
$post = Post::find(1);
$post->meta->username = 'juniorgrossi';
$post->meta->url = 'http://grossi.io';
$post->save();
$post = Post::find(1);
$this->assertEquals($post->meta->username, 'juniorgrossi');
$this->assertEquals($post->meta->url, 'http://grossi.io');
}
public function testInsertCustomFields()
{
$post = new Post();
$post->save();
$post->meta->username = 'juniorgrossi';
$post->meta->url = 'http://grossi.io';
$post->save();
$post = Post::find($post->ID);
$this->assertEquals($post->meta->username, 'juniorgrossi');
$this->assertEquals($post->meta->url, 'http://grossi.io');
}
public function testAuthorFields()
{
$post = Post::find(1);
$this->assertEquals($post->author->display_name, 'admin');
$this->assertEquals($post->author->user_email, 'juniorgro@gmail.com');
}
public function testCustomFieldWithAccessors()
{
$post = Post::find(1);
$post->meta->title = 'New title';
$post->save();
$this->assertEquals($post->post_title, $post->title);
$this->assertEquals($post->title, 'Hello world!');
$this->assertEquals($post->meta->title, 'New title');
}
public function testSingleTableInheritance()
{
Post::registerPostType('page', "\\Corcel\\Page");
$page = Post::type('page')->first();
$this->assertInstanceOf("\\Corcel\\Page", $page);
}
public function testClearRegisteredPostTypes()
{
Post::registerPostType('page', "\\Corcel\\Page");
Post::clearRegisteredPostTypes();
$page = Post::type('page')->first();
$this->assertInstanceOf("\\Corcel\\Post", $page);
}
public function testPostRelationConnections()
{
$post = Post::find(1);
$post->setConnection('no_prefix');
$this->assertEquals('no_prefix', $post->author->getConnectionName());
}
public function testPostTypeIsFillable()
{
$postType = 'video';
$post = new Post(['post_type' => $postType]);
$this->assertEquals($postType, $post->post_type);
}
}
An accomplished Technical Architect with over 10 years of experience in designing, developing, and implementing complex software systems and applications. Possesses strong expertise in various programming languages, cloud computing, and infrastructure management. A strategic thinker with an ability to design and implement scalable architectures to meet business needs.
Start your bullet points with action verbs like 'led', 'managed', 'developed', etc. This helps highlight your skills and abilities in an energetic and straightforward way.
As a technical architect, your role is crucial to any organization. You are responsible for designing and implementing complex technical solutions to meet business needs. Writing a resume that highlights your skills, experience, and accomplishments can help you stand out among other candidates. In this article, we’ll take a closer look at how to write a technical architect resume that showcases your expertise and abilities.
When it comes to resume formats, there are several options to choose from, including a chronological, functional, or hybrid layout. As a technical architect, a chronological resume that showcases your work history is the most effective format. Start with a brief summary of your professional experience, followed by a section on your skills, education, and certifications.
Technical architects require a specific set of skills that go beyond basic IT knowledge. You need to have a deep understanding of complex systems, networks, and programming languages. When writing your resume, highlight your technical skills by using bullet points or a table that lists the technologies you’ve worked with in the past. Some essential technical skills for technical architects include:
Technical architects are expected to have extensive experience in leading technical projects and collaborating with cross-functional teams. Make sure to showcase your experience in your resume by highlighting the projects you’ve worked on and the results you’ve achieved. Use specific examples to demonstrate how you’ve solved technical challenges, delivered projects on time and budget, and improved processes or systems.
Technical architects usually have a degree in computer science or a related field. Make sure to list your educational background in your resume, including the name of the institution, the degree earned, and the date of graduation. Additionally, technical architects are expected to have industry certifications such as:
Include any relevant certifications you’ve earned in your resume, as they can help you stand out among other candidates.
A summary statement at the top of your resume is a brief introduction to your experience, skills, and achievements. It can help recruiters quickly understand what you bring to the table. Your summary statement should be no more than three to four lines. Here’s an example:
"Technical architect with over 10 years of experience leading complex IT projects and designing technical solutions. Proficient in Agile and Waterfall methodologies, cloud computing, and enterprise architecture frameworks. Strong communication and collaboration skills with a proven track record of delivering projects on time and budget."
Writing a technical architect resume requires highlighting your unique skills, experience, and education. Make sure to choose the right format, highlight your technical skills, showcase your experience, mention any certifications you’ve earned, and write a summary statement that highlights your achievements. By following these tips, you can create a compelling resume that showcases your expertise and helps you stand out among other candidates.
Overly long resumes can make it difficult for hiring managers to find the most important information. Try to keep your resume concise and to the point, generally between 1-2 pages.
On Fri, Nov 22, 2019 at 5:51 AM H. Nikolaus Schaller <address@hidden
> Am 22.11.2019 um 11:20 schrieb Andreas Fink <address@hidden>:
>> On 22 Nov 2019, at 09:08, H. Nikolaus Schaller <address@hidden> wrote:
>>> Am 22.11.2019 um 08:40 schrieb David Chisnall <address@hidden>:
>>> On 22 Nov 2019, at 05:31, H. Nikolaus Schaller <address@hidden> wrote:
>>>> And the first thing I turn off in a
>>>> new Xcode project is ARC.
>>> Why? ARC generates smaller code, faster code, and code that is more likely to be correct.
>> I never had a problem with any of those three. Code correctness is rarely a retain/release problem but the algorithm.
>> So it solves only a small percentage of the coding problems I do face.
I once spent 3 days tracking down a memory issue I found in an iOS application I wrote for a client. Granted I got paid for this, but it was by no means the first or last memory issue with the app. The application was very complex and all of this would never have been an issue if ARC had been invented at the time... but this was early days. ARC has made things easier for me without a doubt. I understand memory management, but it can get complex at times. If the compiler can figure it out I prefer to let it do the work.
> Then you have not done any big projects.
I have summed up wc -l on all of my *.m files and the sum is 1434555 in 5656 files.
Not all are originally from me and there may be duplicates.
Doesn't seem to be big projects, indeed.
Still most coding problems are that the algorithm doesn't do what it should do (e.g. wrong loop counters, wrong break logic, wrong understanding of the requirements etc.).
Indeed, but wouldn't it be best if you could SIMPLY focus on the algorithm? Any time spent figuring out or even coding memory management is time NOT spent coding the algorithm.
> Even though my code is well designed in terms of ownership of objects, I still managed to every once in a while shoot myself into the foot and have to track down memory leaks forever or random crashes.
Each time I have such a random crash the log tells me that a deallocated object is referenced and then it is usually easy to fix. But it rarely happens if you know the rules who owns the object and you do not randomly release something you don't own. Leaks are a little harder to detect but can also be avoided if ownership is well defined. Basic rule: you only (auto)release what you create (by alloc or copy) or what you store&retain to be used after ARP cleanup.
The above applies here. It's always easier to allow the compiler to figure it out for you.
Sometimes it is also possible to set up unit tests checking -releaseCount.
> Switching to ARC made all these problems go away. No more use after free, no more keeping objects around after no one uses it anymore etc.
> You don't have to think of releasing stuff if you are in the middle of a long routine and throw an exception or call return.
It's important to remember that it is possible to screw things up with ARC since it uses the @property declaration to determine how to manage memory.
Just use autorelease before risking the exception or return...
It's not ALWAYS so simple.
> This is a major advantage of Objc2.0.
> I must admit it took me a while to get used to though. But at the end it paid off a lot.
Well, to be precise: ARC could also be done with ObjC 1.0 as far as I see. There is IMHO no special syntax for ARC. You just remove all retain/release/autorelease from the code and the compiler is clever enough to magically make it still work.
There is special syntax for ARC, though it may not be immediately obvious. The @property settings (retain, assign, strong, etc) are used by ARC to hint the compiler on how to handle the memory allocation.
So in summary, ARC alone isn't sufficiently helpful for my work to switch to ObjC 2.0 and no longer use gcc.
Certainly not ARC alone, since the obvious solution is to implement it yourself. What concerns me most is that there are features, namely blocks, which cannot be duplicated on GCC. The API is becoming MORE and MORE dependent on blocks for completion handlers. The more features which are added to ObjC (whose reference implementation we should consider as the one implemented by Clang) the further GCC will fall behind and the more we will have to introduce ugly kludges.
# coding: utf8
import re, config, os
from codecs import open

# Use os.path.join for a portable path (the old 'data\\' literal was
# Windows-only), and avoid shadowing the builtin `dict` with file handles.
source_directory = os.path.join(os.path.dirname(__file__), 'data')

with open(os.path.join(source_directory, config.Config().wordFreqList), "rb") as f:
    contents = f.read().decode("UTF-8")

with open(os.path.join(source_directory, config.Config().kanjiList), "rb") as f:
    kanjiList = f.read().decode("UTF-8")
def stringContainsKanji(searchTerm):
    for c in searchTerm:
        # Anything outside the hiragana/katakana block (U+3041..U+30FF)
        # is treated as a potential kanji (note: this includes ASCII too).
        if ord(c) < 12353 or ord(c) > 12543:
            return True
    return False
def getWordFreq(hira, kanji):
    # High default so unknown words sort to the back of the list.
    freq = 999999
    if isinstance(kanji, list):
        for k in kanji:
            regex = u"" + k + " " + hira + " (.*?) "
            matchObj = re.search(regex, contents)
            # Compare as integers: the regex group is a string, and
            # comparing str to int is wrong (and a TypeError in Python 3).
            if matchObj is not None and int(matchObj.group(1)) < freq:
                freq = int(matchObj.group(1))
    else:
        regex = u"" + kanji + " " + hira + " (.*?) "
        matchObj = re.search(regex, contents)
        if matchObj is not None:
            freq = int(matchObj.group(1))
    return freq
def getRTKKeyword(kanji):
kanjiFound = False
kanjiField = ""
keywordList = []
    #print("received kanji len: " + str(len(kanji)))
for x in range(0, len(kanji)):
#print(str(x))
if stringContainsKanji(kanji[x]):
#print("Contained kanji")
regex = kanji[x]+u"\t(.*)"
keyword = re.search(regex, kanjiList)
#print("Keywords: "+str(keyword))
if keyword != None:
if kanjiFound:
kanjiField += "<div></div>"
kanjiField += u""+kanji[x]
kanjiField += " "+str(keyword.group(1)).replace("\r","")
kanjiFound = True
#print("Found kanji: "+kanji[x]+" = "+keyword.group(1))
#print(kanjiField)
return kanjiField
Symfony 3: How can I inject a service dynamically depending on some runtime variable
let's say I have the following interface/concrete classes:
interface EmailFormatter
class CvEmailFormatter implements EmailFormatter
class RegistrationEmailFormatter implements EmailFormatter
class LostPasswordEmailFormatter implements EmailFormatter
I then have a custom 'mailer' service that's called from my controller actions in order to send an email.
What options do I have for injecting the correct implementation of EmailFormatter to my mailer service depending on the type of email being sent?
Typical solution would be to inject a EmailFormatterFactory and then do something like: $formatter = $emailFormatterFactory->create('registration');
I would create a service that picks the right formatter during runtime, either some kind of factory or if your formatters have dependencies maybe a service were you inject the formatters from the container. Something like this:
class MailController extends AbstractController
{
private $mailer;
private $mailFormatterSelector;
public function __construct(...) { ... }
public function someAction()
{
// Do stuff ...
if (...some condition) {
$formatter = $this->mailFormatterSelector->getRegisterMailFormatter();
} else {
$formatter = $this->mailFormatterSelector->getLostPasswordEmailFormatter();
}
$this->mailer->sendEmail($formatter);
// Do more stuff ...
}
}
class MailFormatterSelector
{
private $registrationFormatter;
public function __construct(EmailFormatter $registrationFormatter, ...)
{
$this->registrationFormatter = $registrationFormatter;
...
}
public function getRegisterMailFormatter(): EmailFormatter
{
return $this->registrationFormatter;
}
// ...
}
Alternatively if you have to pass the formatters into your mailer during construction, you can also create multiple, differently set up instances with different aliases and then inject them as needed into the services and controllers like this:
# config/services.yaml
mailer1:
class: MyMailer
arguments:
$formatter: '@formatter1'
mailer2:
class: MyMailer
arguments:
$formatter: '@formatter2'
MyMailController:
arguments:
$mailer: '@mailer2'
In your controller or action you can then pass in mailer1, mailer2, ... (maybe with nicer names) via their service definitions.
Thanks for the ideas. The only problem I see with the first option is that injecting the formatters into MailFormatterSelector in the first solution would get clunky if there were 10 different formatters for example.
With the second solution, I don't know which formatter I need until I'm in the controller action so it cannot be defined statically.
Yes, if you have many formatters you might want to use an addFormatter() method and then use service tags and CompilerPasses to assign them or if you use a recent Symfony version use the Service Locator
Yeah thanks that's bang on, I'm using an older version of Symfony, this is a good example it seems: https://symfony.com/doc/3.1/service_container/tags.html
Yes, like this.
There is a Symfony ServiceLocator service which is the best to use for this case. https://symfony.com/doc/current/service_container/service_subscribers_locators.html
I’ve been pondering this thing. Here’s the results. Also there’s a start to the branding as you can see in the title!
I’ve been preparing a few things.. I know right, gonna be preparing for 6 years then get bored and do something else at this rate. Very close to actually doing things, promise!
This is a followup to Profit Share if you haven’t got a clue what this is about.
Being a pessimist grumpy bastard the first thing I want to look at are the problems I’ve identified and how they can be solved.
Tokens. Don’t panic! The profit share thing is still in place. The problem was Twitch use the term tokens for their mobile app currency for subs. I’m going to rename tokens to coins from now on to avoid butting heads with Twitch on that.
Cashout. This will be a good problem to have of course, but I keep picturing myself having to send a thousand tiny tiny payments every month that increase over time. For the sake of sanity the min payout will be a tenner (which rolls over each month until it hits $10 without doing the charity donation thing)
Copyright. It really feels like it’s in bad taste to require everyone to sign over their copyright, however I’ve not come up with a better way of handling this. We’re not building open source (at least not primarily and definitely not to start off) we’re building businesses and products. Businesses and products that might one day be sold, ideally without needing permission from a whole crowd of people. While I want to change this one and use a lovely hippie license we can all enjoy, I have to keep the copyright ownership thing in place for now.
Boredom. How on earth do you make a business building slash code stream entertaining enough for those that aren’t involved to stick around long enough to get involved? Then again actually if the marketing part is done well they’ll already be interested, they’re here because they’re interested.. Maybe this one’s a non-issue.
For this to work we need some motivated people to get involved. I need to figure out where they hang out and get the word out to them without spamming.
The easiest way would of course be to lean on the “I pay you to watch my stream” angle but that feels it might be a little bit too dishonest for me. It’s a maybe.
I also need to be aware that I’m a PHP guy so most of the codey websitey projects will be started using the Laravel PHP framework. At least I’ve already alienated a lot of the naysayers out there with that one so at least there’s that.
I still need to find other PHP devs, frontend folks, business savvy people, etc. Not limited to those of course, anyone that’s motivated is absolutely welcome, there’ll always be something to do.
I’m not sure if I should pre-plan the project before doing it or get it all done on stream. I’m quite aware that me scribbling notes and then crossing them out might be a bit of a bore, but then again if I explain my thinking..
You know what, all this self doubting waffling leads me into the final point.
Hit the Start Stream button lol
Almost. Almost. My main PC runs Linux so naturally things are randomly broken sometimes and a pain in the arse. This interferes with OBS and causes massive frame drops. I’m having a look at setting up a second PC that can handle the stream part. Bonus: If I get a fatal error you can see the PC rebooting instead of the stream going offline!
I gotta move the PC and NAS and fans off the desk the mic is clamped to too.. just a constant buzzing whir in the mic if the fans spin up or the NAS is doing whatever it does now and then.
I’m also relying on Virgin Media so there’s a bunch of downtime too.. unfortunately my 4G backup line doesn’t quite get a reliable enough signal to be a viable alternative either. Hopefully that one fixes itself because I don’t even know from there.
Soon. Very soon. Next week? I dunno. This month? Hopefully. Next year? Soon!
P.S. Also I know, I really need to simplify the explanation. Working on it.
|
OPCFW_CODE
|
Django order query by foreign key (reverse direction)
I've 2 simple models, like this:
class Obj(models.Model):
...
and
class Objdata(models.Model):
obj = models.ForeignKey(Obj)
...
datum = models.DateTimeField()
. My goal would be to select all Objs based on the belonging Objdata's latest datum entry.
Maybe it's already too complicated for Django; however, on the SQL side it's not that complicated to query.
So, is there a Django way to achieve this, or what would be the best way to implement it? My solution is a bit complicated at the moment.
A small pseudocode might help what i want to achieve:
lst = []
for elem in Obj.objects.filter():
try:
lst.append((elem.objdata_set.all().order_by('-datum')[0].datum, elem))
except:
lst.append((elem.datum, elem))
res = [e[1] for e in sorted(lst, reverse = True)]
You'll have to explain further. What does "select all Objs based on the belonging Objdata's latest datum entry" mean? (I realize English is not your first language.) Do you perhaps mean you want to get the latest Objdata for each Obj?
You're right about my English :-). I updated my question with additional pseudocode showing what I would like to achieve. As a result I don't want to get Objdatas, but I want to get Objs.
If I understand your code correctly, you just want to get all Objs, sorted by their latest Objdata.datum. You can do that with aggregation:
from django.db.models import Max
objs = Obj.objects.annotate(latest_data=Max('objdata__datum')).order_by('latest_data')
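For what it's worth, the semantics of that annotate/order_by can be sketched in plain Python (hypothetical data, not Django code) to show how objects without any Objdata end up ordered:

```python
from datetime import datetime

# Hypothetical stand-ins: each Obj mapped to the datum values of its Objdata rows.
data = {
    "obj_a": [datetime(2015, 1, 5), datetime(2015, 3, 1)],
    "obj_b": [datetime(2015, 2, 10)],
    "obj_c": [],  # an Obj with no Objdata rows at all
}

# annotate(latest_data=Max('objdata__datum')) computes the max datum per Obj;
# objects with no related rows get NULL (None). How NULLs sort relative to
# real values in order_by is database-dependent; here None simply sorts first.
annotated = {obj: (max(datums) if datums else None) for obj, datums in data.items()}
ordered = sorted(annotated, key=lambda o: annotated[o] or datetime.min)
print(ordered)  # ['obj_c', 'obj_b', 'obj_a']
```

Note the obj_c case: that is the difference the questioner mentions — the aggregation keeps objects with no Objdata (their latest_data is NULL) instead of falling back to another field.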
Yes. This is exactly what I want. The result is slightly different from mine as it handles the case when obj.objdata_set.all().count() == 0 differently, but this is even better. Thanks for it!
Oh, ok, done, I didn't know there was this possibility, but now I can see the green tick :-).
As an alternative, if you think Django's model query is too much or not enough to achieve what you want, or you'd prefer SQL ("however on sql side it's not that very complicated to query it"), you can use raw():
Obj.objects.raw('SELECT ... your SQL statement here;')
This will return model instances; you can also perform custom SQL directly using connection.cursor().
Yes, thanks, I know, I just wanted a djangonic way :-).
@user2194805, this is perhaps the most efficient and fastest djangonic way, because custom SQL avoids Django's model layer. Even raw() will perform better than a normal lazy Django QuerySet. Many Django apps that require performance usually end up using raw SQL.
@Anzel That's bogus. The ORM has almost no overhead compared to raw() (less than 5% for a single-row pk lookup, less than 0.5% for a more expensive lookup of multiple ordered objects). Direct use of connection.cursor() provides a performance increase at the cost of development speed and all ORM features that a lot of Django depends on. To bypass a large part of the Django framework for a negligible performance boost is certainly not pythonic imho. I simply cannot see any but the most performance-critical apps using raw sql over the ORM for any queries that the ORM can reasonably handle.
@knbk, it's a fair point, but consider: if you have multiple tables with several related fields, the overhead becomes much more expensive. Raw/custom SQL gives an edge over the ORM by not hitting the database/tables multiple times; crafting a careful ORM query may help, but at the same time it increases the cost of development. And remember, many developers actually feel more comfortable writing SQL statements.
@Anzel The ORM does not hit the table/database more than raw sql, unless you completely ignore the most basic of optimizations. Multiple databases are the exception, but that's only really applicable if your site is part of a larger project. I can see where it has its uses, and even why some developers would prefer raw SQL, but that doesn't make it pythonic and I would certainly not recommend it to people who don't absolutely need it. Raw SQL also opens you up to SQL injection if you're not careful enough.
@knbk, sure but OP's asking for Django way, not Pythonic. And for the django applications I've worked with before which required optimization, I'd say > 95% ends up using raw or custom SQL at some points. I do understand where your points are coming from, I'm not arguing but this is quite honestly what I believe in.
If I understand you correctly, try this (note that .distinct() with field names is PostgreSQL-only):
objdates = Objdata.objects.order_by('obj_id', '-datum').distinct('obj_id').select_related('obj')
objs = [objdate.obj for objdate in objdates]
Don't forget to put .select_related('obj') in there. Saves you a query for each Objdate instance you get.
@knbk thanks for your comment. I didn't know .select_related before but it is good. I've edited my answer.
|
STACK_EXCHANGE
|
Improve running binaries experience
- Make it possible to execute a binary from the Files browsing UI, as legacy support.
- Keep the primary action (on double click and [Return]) safe.
- Ensure there is feedback that the program is running, in case it takes long to start up or doesn't have a visible window.
Display a new context menu action: "Run as a Program…".
- This is never the first action in the menu, it does not replace "Open".
- There is no keyboard shortcut for this action. Opening the context menu and choosing that action is required.
- This action is displayed for any executable binary file, even if the execute bit is not set.
- If the checkbox is ticked, the "Run" button at the top of the dialog becomes sensitive.
- Clicking the "Run" button marks the file as trusted (setting the execute bit, if it is not set yet), closes the dialog, and opens a Terminal window in which the binary executes.
After you have run the program once, and as long as the file keeps the execute bit, the "Run as a Program…" action will directly open a Terminal window in which the program executes, bypassing the "Untrusted Program" dialog.
When a binary executable file is activated by double-click or the [Return] key, or when using the "Open" action from the context menu, the usual "Could not display “goodoldgame.bin”" dialog is displayed, but the dialog's text would explain that if you want to run it as a program, you should open the context menu and choose "Run as a Program…".
Benefits of the solution
- Gives off a legacy support vibe (by calling it "program" instead of "application", displaying a terminal window, etc.) while keeping it easy to use.
- The double click action is always safe because it doesn't allow running the program, not even indirectly. But it doesn't leave the user helpless either, because it provides instructions to find the action they may be looking for.
- This also makes it possible to extend this feature to executable script files, whose double-click action is to open them in a text editor.
- As we don't know beforehand if a given binary will display its own window or not, always opening a Terminal will ensure that there is feedback that something happened/is happening in response to the user action.
- This prevents the situation where a person runs a program that expects input in a terminal emulator. Such a program would keep running in the background, with no way to close it except hunting down the process to kill it. Worse, there is no feedback that something is running in the background, so the user may try to run it again, which does not make things any better.
- Removes the need to know about and find the execute bit checkbox, instead asks the user to trust the file.
- The execute bit doesn't work as a security measure anyway, because it may already be set without the user's own consent (if extracted from archives, or if stored on a FAT filesystem).
- If the user "trusts" a file for execution, it is not necessary to ask to also check the execute bit. It is implicit in the user's intent.
- The execute bit can be unset at any time from the properties dialog anyway.
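As an aside, the check-and-set behavior described above is only a couple of lines in most languages. A Python sketch (using a throwaway temp file as a stand-in for the downloaded binary):

```python
import os
import stat
import tempfile

# Throwaway file standing in for a downloaded binary such as "goodoldgame.bin".
fd, path = tempfile.mkstemp()
os.close(fd)

def is_executable(p):
    """True if any execute bit (user/group/other) is set on the file."""
    return bool(os.stat(p).st_mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH))

before = is_executable(path)                          # mkstemp creates files as 0600
os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)  # "trusting" sets the user x bit
after = is_executable(path)
os.remove(path)
print(before, after)
```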
- Will display a Terminal window even for programs with a GUI window. A person might think this window is not necessary, close it, and as a consequence also close the program.
- Gnome-terminal already provides a confirmation dialog when closing a Terminal with a running program, which may help mitigate this.
- Text is boring to read. People might not notice the instructions from the "Could not display “goodoldgame.bin”" dialog.
- I purposefully did not mockup this dialog because I'm not very good with text.
- The text for the "Untrusted Program" dialog needs to be improved as well.
|
OPCFW_CODE
|
Google recently announced they will be moving Gmail to a strict DMARC (Domain-based Message Authentication, Reporting, & Conformance) policy by June of 2016. Under the DMARC specification, incoming messages are validated using the DomainKeys Identified Mail (DKIM) and Sender Policy Framework (SPF) systems, and must pass at least one of those checks with a domain that aligns with the From address. In other words, if a message uses a Gmail "From" address but did not originate on Google's own servers, it will be rejected.
Over the past three and a half years, the DMARC specification has proven valuable in preventing fraudulent emails. Thousands of companies use it to prevent abuse of their domain names within email, which protects millions of people from phishing and spoofing attacks.
Google will be the third major email provider to implement the DMARC specification. Yahoo initiated DMARC support during April of 2014 to prevent large scale abuse of Yahoo Mail. This implementation, which was very successful, will be extended to ymail.com and rocketmail.com by November 2nd — and possibly to other domains within the coming months. AOL also began to follow the DMARC specification last year in response to a large scale campaign targeting their domain.
In addition to adopting a "p=reject" DMARC policy, Google also plans to support the new ARC (Authenticated Received Chain) protocol. ARC provides a workaround for users negatively affected by DMARC changes. The protocol, which adds a cryptographically signed header in lieu of DMARC validation, has been submitted to the IETF (Internet Engineering Task Force) for consideration. ARC was also presented during the recent Malware and Mobile Anti-Abuse Working Group (M3AAWG) meeting in Atlanta, Georgia with the hopes of engaging the technical community in refining and testing it. An interoperability event has been scheduled for the first quarter of 2016 in order to evaluate ARC.
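For reference, a "p=reject" policy is published as a DNS TXT record on the _dmarc subdomain of the sending domain; a minimal example (hypothetical domain and report address) looks like this:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

The p tag sets the policy (none, quarantine, or reject) and the optional rua tag tells receivers where to send aggregate reports.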
"More and more companies have been adopting DMARC and email authentication over the past few years, with more vendors and service providers adding the necessary support to their offerings in order to make that adoption simpler," according to Steven Jones, Executive Director of DMARC.org. "With new protocols like ARC emerging to address the traditional email use cases that were problematic under some DMARC policies, and the leadership of forward-thinking companies like Google, Microsoft and Yahoo, I expect to see the rate of adoption accelerate globally."
About MailerMailer's DMARC Support. MailerMailer recommends using your own private domain name when sending messages with our system. However, if you must use an email address covered by a DMARC policy (e.g., Yahoo, AOL, soon Gmail) as your sending address, we will substitute one of our own addresses for it. We will then set the reply-to address to your originally selected email address.
|
OPCFW_CODE
|
Creating a Plan Job
Create a plan job in Resource Manager.
Creating (running) a plan job parses your Terraform configuration and converts it into an execution plan for the associated stack. The execution plan lists the sequence of specific actions planned to provision your Oracle Cloud Infrastructure resources, including actions that are expected after running an apply job. We recommend running a plan job (generating an execution plan) before running an apply job. The execution plan is handed off to the apply job, which then executes the instructions.
For configurations stored in a source code control system, such as GitHub or GitLab, the job uses the most recent commit.
Using the Console
- Open the navigation menu and click Developer Services. Under Resource Manager, click Stacks.
- Choose a compartment that you have permission to work in (on the left side of the page).
- Click the name of the stack that you want. The Stack details page opens.
- Click Plan.
- In the Plan panel, fill in the fields.
- Name: Name of job. A default name is provided.
To configure advanced options, click Show advanced options and fill in the fields.
- Upgrade provider versions: Retrieves the latest versions available from the configured source of Terraform providers (the stack must be Terraform 0.14 or later; older stacks must be upgraded to use Terraform Registry). Required if provider versions in the Terraform configuration changed since the last time a job was run on the stack. Dependency lock files are automatically managed for new and updated stacks. Providers are updated within the version constraints of your Terraform configuration.
- Detailed log level: Verbosity to use for Terraform detailed log content for this job. Default: None (no detailed log content is generated). For more information, see Debugging Terraform.
- Maximum number of parallel operations: Concurrent operations as Terraform walks the graph. Use this option to speed up the job. Note: A high value might cause resources to be throttled. For example, consider a Terraform configuration that defines hundreds of compute instances. An apply job attempts to create as many instances as possible at the same time; a value of 100 might cause throttling by the Compute service.
- Refresh resource states before checking for differences: Fetch the latest state of stack infrastructure before running the job. Default: Enabled. Use this option to refresh the state first; for example, consider using it with an apply job run on existing infrastructure that was manually updated. Note: Refreshing the state can affect performance. Consider disabling it if the configuration includes many resources.
- Tags: Optionally apply tags to the job.
- Click Plan.
The plan job is created. The new job is listed under Jobs.
Using the CLI
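A minimal command sketch, assuming the resource-manager commands in a current OCI CLI (check `oci resource-manager job create-plan-job --help` for the exact flags on your version; the OCID below is a placeholder):

```shell
# Create a plan job for an existing stack (replace the OCID with your stack's).
oci resource-manager job create-plan-job \
    --stack-id ocid1.ormstack.oc1..exampleuniqueID \
    --display-name "plan-job-1"
```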
Using the API
Use the CreateJob operation to create a plan job.
|
OPCFW_CODE
|
[ntp:hackers] smearing the leap second
stenn at ntp.org
Fri Jul 10 20:06:04 UTC 2015
Mike S writes:
> On 7/10/2015 10:09 AM, Martin Burnicki wrote:
> > Mike S wrote:
> >> Ditto, except it is an NTP issue since it only looks like a backward
> >> time step because the canonical implementation of NTP doesn't follow its
> >> own RFC, and doesn't use a monotonic timescale. If NTP did it right,
> >> there wouldn't be any issue.
> > Nope. On standard installations ntpd just passes the leap second warning
> > down to the OS kernel, so the kernel can handle the leap second at the
> > right point in time. However, *how* it is handled depends on the kernel.
> You say that as if it's true. It isn't. NTP (the implementation) uses a
> timescale which is not monotonic, similar to POSIX. (1) When a leap
> second occurs, NTP (the implementation) steps its timescale back 1
> second. Sure, it passes leap second info, then it goes on to do things
> While it claims to count seconds in an epoch (see the RFC, the current
> one started 00:00 1 Jan 1900), it doesn't - it operates in direct
> violation of its own RFC. That's because both NTP (the implementation)
> and POSIX try to do the impossible, both count epoch seconds and assume
> a fixed number of seconds in a day.
I think I disagree with your interpretation.
> It's a very fundamental flaw, trying to do the impossible.
I'll say it again: reality bites. We can declare that pi is 3 because
that makes the math easier, too.
> NTP _should_ simply focus on sync'ing epoch seconds with precision and
> accuracy. Then it has no need to handle, or even be aware of, leap
> seconds. Distributing notice of upcoming leap seconds (even though doing
> nothing internally with that info) to a host is a desirable, but
> ancillary function. The actual handling of leap seconds should be a host
> issue, just as conversion from NTP's timescale to the host's timescale
> should be.
Issues with the leap second are best solved at the lowest-level
possible. Anywhere else gets more expensive and/or provides worse data.
NTP provides a number of mechanisms to handle the leap second. These
mechanisms enable folks to implement a good number of local policy
choices about the leap second.
If you want to run your system using GPS time and the posix/right
timezones, goferit. Same for using POSIX time and letting the kernel or
NTP do its thing. Same for the leap smear.
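For readers unfamiliar with the term: a leap smear spreads the inserted second over a long window instead of stepping the clock. A toy sketch of a linear 24-hour smear follows (an illustration only, not ntpd code — real smears, such as the one Google has described, are applied server-side and may be shaped or centered differently):

```python
SMEAR_WINDOW = 24 * 3600  # seconds over which the extra second is spread

def smear_offset(seconds_into_window):
    """Offset (in seconds) absorbed by the smear, growing linearly
    from 0 at the start of the window to 1 full second at the end."""
    if seconds_into_window <= 0:
        return 0.0
    if seconds_into_window >= SMEAR_WINDOW:
        return 1.0
    return seconds_into_window / SMEAR_WINDOW

print(smear_offset(0))                  # smear not started
print(smear_offset(SMEAR_WINDOW // 2))  # halfway through
print(smear_offset(SMEAR_WINDOW))       # leap second fully absorbed
```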
A big problem today is that NTP is implemented using cooperating
distributed systems, and if different systems use different timescales
(GPS, smeared, backward step, a slew that starts at who-knows-when) then
there's no way to communicate that with NTPv4 or earlier.
NTF's General Timestamp API can address this, and as soon as we have
resources to make it happen it will get done. And even when that is
finished there will be a huge number of folks who will still run old
versions of NTP because they don't see any reason to change.
Harlan Stenn <stenn at ntp.org>
http://networktimefoundation.org - be a member!
More information about the hackers mailing list
|
OPCFW_CODE
|
Creating a shader for a non-directional metal material
I'm creating renderings of the products we sell for our website. All of our finishes are non-directional, meaning we use a circular palm sander on the metal. So it is essentially brushed but in a randomized, radial movement. I am new to blender but it would be very helpful if I could create these materials. The photos below are what I am shooting for. I'd love for the renderings to be as realistic as possible.
here is what I have come up with so far. Any tips for me to get it closer to yours?
A metallic material with low roughness will get you there. What is your question exactly ?
I guess I'm just so new to blender that I could use a little direction with that. I've been playing around with it but it still looks a little too much like a rendering in my opinion.
You'll need to add some textures to control the reflectivity of the metal, which can be found online for free. This one from this site is pretty decent. There's also textures.com and other places to find them. You'll want to add some to the roughness input on your material as well as maybe fiddling with the Anisotropy (the "brushed" look). Using an HDRI to light your scene will also help contribute to realism as well.
It seems to me you might be in a position of making yourself some higher quality reference material than you can easily find online. Is it possible for you to take some high quality, high resolution photographs of the material finish? Preferably some in a more complex natural environment and some with even color reflections from different angles as well as larger area or the material so low frequency patterns can also be visible.
Hello and welcome. This site is not a regular forum, the answer section is reserved exclusively for answering the OP question. To add details to your question, simply [edit] your question instead of using the answer section.
Calling those finishes "non-directional" may be an internal/marketing term you use, but IMO it's not physically accurate. The circular brush is making many anisotropic swirls. They are all directional, they just change direction such that the overall surface has no uniform direction.
As far as I can tell the realism of the material is in the texture of roughness and possibly other properties like variation in reflected color (although it's hard to tell without observing the material in real life).
I can get a bit closer to the look just by using some sort of noise:
However, as you can clearly see, it's not close to photorealism, because my quick procedural texture is not close to the one the actual material has. The variation in roughness has complex organic patterns that I cannot even see from the reference to be able to attempt to make it realistically procedurally (which is probably not very straightforward even with better reference). My belief is that if you want a good result, you need to recreate that texture with complex noise that has subtle and specific variation in detail, shapes and scratches. I think the best approach would be to experiment with a large sheet of metal with that exact finish and see what uniform single color background it could reflect in a way so that the texture is visible, photograph it and extract the texture from the photos. If that is not possible, more reference would still be needed and it might be possible to achieve better results by attempting to make the texture in image editing software like Photoshop, Gimp or Krita. You would need to observe and identify various kinds of noise patterns to do so. Frequency separation techniques might be helpful there. I think that would require a lot of experimentation and effort either way.
Oh, and I assume you would model the product to exact dimensions - I think I got the slopes of the bottom completely wrong, they are probably more shallow.
The patterns might be in part related to corrosion, so you might look into other metals like copper where they are way more clearly visible. They might be similar in texture but differ in color and contrast only.
Thanks Martynas, This is much more realistic than what I have been able to come up with. I copied your nodes and my version looks a lot different. I realize a lot of that can depend on lighting and scale and what not. What kind of lighting do you have and what type of background?
It's some HDRI from HDRI Haven. It is important for the look - you are right, because most of what you see in metal material like this is reflections, so you need something to reflect.
Is there a way I can upload a photo on here so I can show you what I have and get some feedback of what I need to do? Do I need to answer my own question so I can add a pic?
If you have an answer to your own question it's considered to be a very nice practice to post it as an answer here even if it's your own question. Then the solution you ended up with will remain for others who might be searching for an answer to a similar question in the future.
I use ShareX, but there's other software. You take a photo of what you want to share, then it makes a URL for you to post. The comments allow you to create links to images.
Info Link
also, anisotropy is an essential setting for some metals where the specular gets smeared. gets really cool when you plug some scratch textures into it. though might not be applicable here. link
have you tried the anisotropic property?
You can use a texture map to fine control rotation.
Example of a random scratch texture controlling anisotropic rotation.
Anisotropic reflection is clearly visible in your shared image, but it's not visible at all in the images shared by OP. Why would they try something that does not apply to the situation?..
So one thing that might be worth noting is that when I import my stl file I scale it down to .12 in order for it to fit under the camera properly. Would this be affecting my outcome of materials?
Another question, @martynas, How did you get those cross brakes in the basin. I assume it was something to do with altering the mesh, but that looks nice and my drawings didn't have those.
@MartynasŽiemys his material could very well take advantage of anisotropy for realism, just not to the degree that I'm using it in my photos. I've got very similar results to his using a bit of anisotropic + microscratches.
@ColinHildenbrand if you're referring to my photos of the metal, it's just the anisotropic setting in the material. Then I added a black and white scratch texture to randomize the anisotropic rotation.
Sorry John, I was referring to @MartynasŽiemys on how he made the cross brake lines on the bottom of the sink basin. I would like to do those on my file.
@JohnCheathem, well, if one observes any on their reference, they should use it. There might be some on the material - it's not visible in this case. Anisotropic shader is not for scratches though and your renders look nothing like the reference in the question so in this case, it should not be used and this answer may be confusing. It's good idea to get better reference(for the millionths time! @ColinHildenbrand) and look closer if some anisotropy might be there. Cross indentations at the bottom are in the geometry of the model. Play with HDRI rotation to see what gives nicer reflections.
IMO, buffing any metal with a circular palm sander, unless it's using the very softest cloth, is going to cause anisotropic regions. It seems to me that OP's photos definitely do show this, variations in anisotropic reflection angle causing a non-uniform appearance.
|
STACK_EXCHANGE
|
In my second post in this series I showed how to create a page from existing widgets, and in my last post I showed how to create a custom widget. In the custom widget post I showed how to specify i18n properties files for different locales in order to ensure that the widget labels could be rendered in different languages; however, I didn't demonstrate how to localize pages, nor how the two approaches work together.
How it Works Currently...
or from the FreeMarker template by calling:
You may also know that traditionally Share would pass all of the messages from a WebScript into the widgets that it instantiated by calling its '.setMessages()' function.
Finally, you should also be aware that there are global properties files that can be used throughout Share (“common.properties” and “slingshot.properties”, which can be found in '/share/WEB-INF/classes/alfresco/messages').
- “global” contains all the messages from the global properties
- “scope” is a map of widget name to messages map
The Share widgets’ “.setMessages()” function adds the widget's own name as a key into the scope map and assigns all the supplied messages as an object against that key. For example, if the “Alfresco.DocumentList” widget is instantiated then “Alfresco.messages.scope['Alfresco.DocumentList']” can be used to access its messages.
How the Updated Approach Works...
We've ensured that the updated development approach is consistent with this old pattern and have intentionally not followed the standard Dojo pattern. The new approach uses the same “Alfresco.messages” object (although this can be reconfigured if you want to use a different root variable) and still sets the “global” and “scope” attributes.
If you create a widget with an “i18nScope” attribute then this is the scope into which the widget's encapsulated messages will be added. If no “i18nScope” attribute is defined then the messages will go into a scope called “default” (unless the widget extends another widget, in which case it will inherit its “i18nScope” attribute).
The i18n properties from the WebScript that processes the JSON model will automatically be placed into a new attribute of “Alfresco.messages” called “page”.
Whenever the “.message” function is called from “Alfresco/core/Core” (see previous post) all applicable scopes are searched, e.g.
- default scope
- all inherited scopes
- widget scopes
...and the most specific result will “win”.
When creating a custom widget there is obviously a distinction to be drawn between:
- labels that never change
- variable labels that can be selected from
- completely custom labels
For example, the label for a menu item cannot realistically be included as part of the widget, but an error message could be. When accepting configurable labels it's worth passing them through the “.message()” function in case a message key (rather than a localized message) has been provided, since if no match is found the supplied value is returned.
This means that when constructing the JSON model for a page you could provide:
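A hypothetical pair of widget entries (made-up widget name and key, sketched here since the original snippets did not survive extraction) illustrating the two options:

```json
[
  {
    "name": "example/widgets/ExampleWidget",
    "config": { "label": "A literal, already-localized label" }
  },
  {
    "name": "example/widgets/ExampleWidget",
    "config": { "label": "message.key" }
  }
]
```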
At first glance these might appear identical, but if the widget defines a message with the key “message.key” then this will “win” over any message that the WebScript might be able to resolve.
It’s also worth bearing in mind that because widgets process locale-specific properties files in exactly the same way as WebScripts, it is possible to simply reference a WebScript's properties file in the “i18nRequirements” attribute of a widget. In a future post you’ll see how this can help you to wrap existing widgets easily so that they can be mixed with our new library of widgets.
Hopefully this post has explained how i18n properties are handled in the new approach to page and widget construction. We have made efforts to ensure that the updates are compatible with the existing messages handling and have deliberately kept with the Alfresco approach rather than adopting the standard Dojo approach to avoid creating a divide. Ultimately we've done as much as possible to ensure that Surf takes care of all of the hard work for you.
|
OPCFW_CODE
|
json describing a boolean
I have a field in a server-side object model that's a boolean. I'm writing a custom JSON converter and I'm wondering how best to encode this for JSON. Should I leave it as a boolean, or should I convert true to 1 and false to 0?
What's the best way to do this?
Thanks.
Why are you writing a custom JSON converter?
Because some properties are not meant to be visible to the client.
I'm not understanding why you'd want to describe a boolean as anything but a boolean, unless you're using a language that doesn't support that type.
To save space in the json string; apparently it's not such a good idea.
Alright, but then it's not about how to represent a boolean. It's about saving space. If you want to save space, then that's something you need to decide. You also need to consider whether it will make sense to the targeted languages.
...FYI, JavaScript's JSON.parse and JSON.stringify accept a "reviver/replacer" function that lets you mutate the data during processing. Perhaps your server-side language offers the same thing?
The more I think about it, the more I'll just leave it as a boolean. For now I'm the only developer on my app but it seems it'll be easier than later explaining that booleans are converted to 0/1. I upvoted everyone, thanks for your feedback.
That's definitely the safe route. Plus, if you're gzipping the files, there's a good chance that it'll make little difference anyway.
I prefer using the true/false keywords in my JSON, but 1/0 will still work. Is your question on how to write the code to create this JSON object? It depends on how you are implementing the converter. What have you got so far?
As per the RFC (§2.1), booleans are either true or false.
There is no defined best way. It depends on you (who uses this).
Personally I think, if you are representing a boolean, use true/false instead of 1/0. That is more readable (for future developers who are going to maintain this code, or when you look at this code after a few months).
To add to this, using True/False may make your code more robust in the future. Whatever language you use now to interpret the JSON (probably JavaScript) will treat 1 as truthy and 0 as false, but what if you expose this JSON as part of an API later? Someone may reach out to your service, parse the JSON and create an Int/Number/Whatever strongly-typed, truthy object for 0, then have to work around this limitation in an illogical way. Even if there are no plans to ever do this, it's usually beneficial to reduce the semantic gap between what you want and what you mean to as little as possible.
I recommend using true and false as it makes it more clear that the values are boolean.
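The size/robustness trade-off discussed in this thread is easy to measure directly. A hedged Python sketch (the field names are invented for illustration):

```python
import gzip
import json

record = {"active": True, "verified": False, "admin": True}

# Standard JSON booleans: round-trip back to real booleans.
as_bool = json.dumps(record)

# "Compact" variant: encode True/False as 1/0.
as_int = json.dumps({k: int(v) for k, v in record.items()})

# The 1/0 form is a few bytes shorter per flag...
print(len(as_bool), len(as_int))

# ...but compare the gzipped sizes too before deciding it's worth it.
print(len(gzip.compress(as_bool.encode())), len(gzip.compress(as_int.encode())))

# The cost of 1/0: the boolean type is lost after a round trip.
decoded = json.loads(as_int)
print(type(decoded["active"]))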
|
STACK_EXCHANGE
|
require 'projectEuler'

class Problem_0348
  def title; 'Sum of a square and a cube' end
  def difficulty; 25 end

  # Many numbers can be expressed as the sum of a square and a cube. Some of
  # them in more than one way.
  #
  # Consider the palindromic numbers that can be expressed as the sum of a
  # square and a cube, both greater than 1, in exactly 4 different ways. For
  # example, 5229225 is a palindromic number and it can be expressed in
  # exactly 4 different ways:
  #
  #   2285^2 + 20^3
  #   2223^2 + 66^3
  #   1810^2 + 125^3
  #   1197^2 + 156^3
  #
  # Find the sum of the five smallest such palindromic numbers.
  def sols( n, sq, cb )
    count = 0

    # Subtract cubes until they're too big to produce a positive difference.
    cb.each do |c|
      break if c > n - 1
      count += 1 if sq.has_key?( n - c )
    end

    count
  end

  def gen_pals( len )
    # Generate an array of all palindromic numbers having exactly len digits.
    # Use brute force string manipulation.
    front = (len - 1) / 2
    back  = len - front - 1
    low   = 10**front
    high  = 10*low - 1

    pals = []
    (low..high).each do |i|
      s = i.to_s
      pals << (s + s[0, back].reverse).to_i
    end

    pals
  end

  def solve( n = 5 )
    # Nothing sneaky here... Pre-compute a bunch of squares/cubes, then step
    # through the palindromic numbers. (We generate them methodically to have
    # more and more digits, as needed.)
    found = []
    len = 0

    sq, cb = {}, []
    (1..50_000).each {|i| sq[i*i] = i; cb << i*i*i}

    while n > found.size
      len += 1
      pals = gen_pals( len )

      pals.each do |p|
        found << p if 4 == sols( p, sq, cb )
        break unless n > found.size
      end
    end

    found[0, n].reduce( :+ )
  end

  def solution; 'MTAwNDE5NTA2MQ==' end
  def best_time; 10.94 end
  def effort; 25 end

  def completed_on; '2016-08-17' end
  def ordinality; 1_781 end
  def population; 623_000 end

  def refs
    ['https://www.emis.de/journals/GM/vol12nr1/andrica/andrica.pdf',
     'https://oeis.org/A171385']
  end
end
|
STACK_EDU
|
Enable Oracle API to send logs to OpenSearch
Following same changes to Backend API, update the Oracle API service allowing it to send logs to OpenSearch.
Backend API PR: https://github.com/bcgov/nr-spar/pull/202
Backend API Issue: https://github.com/bcgov/nr-spar/issues/203
Stealing this one for me :ninja: 🦹
Moving to blocked for 2 reasons:
We need to talk with OneTeam about how to make one instance of the fluentbit deal with the logs of both the Backend API and the Oracle API.
I can't connect to BC Gov's VPN due to a certificate issue; I'm already talking with Encora's IT team to solve this.
For now, all the changes are pushed to the working branch, they can be checked here: https://github.com/bcgov/nr-spar/commit/7e5d2b40f40f57a04d36307431f6a971efc07c42
@mbystedt @andrwils would either of you be able to help Matheus with the issue described above? Matheus can provide any additional info that you need
We need to talk with OneTeam about how to make one instance of the fluentbit deal with the logs of both the Backend API and the Oracle API.
I think we're missing some information about the ask here. Feel free to setup a meeting. One instance of Fluentbit can easily handle quite a number of different sources of logs. There's no difference between having a single input source or 10. The only thing that becomes complicated is tags. We came up with a couple strategies for keeping tags straight in Funbucks for the on-premise that we can share.
Sorry, that was already solved, I forgot to update the ticket... The situation right now is that we can't see the generated logs on opensearch, we just need to get some directions on how to make it work! You can check the changes we made on this PR, which is also linked to this issue. Thanks!
@mgaseta Well, now I know who's responsible for the 8000+ errors triggering our alerts. This change should be rolled back until the Fluentbit configuration is fixed.
I can take a look at the output from FluentBit. But, it'll be much quicker if you just use the public docker image (artifacts.developer.gov.bc.ca/cc20-gen-docker-local/nr-funbucks:latest) that can be used to test your output. Unless it's exceedingly obvious, that's what I will be doing, anyway. Docs are here: https://bcdevops.github.io/nr-apm-stack/#/testing
I'll update this with the docker command. podman run -d -p 3000:3000 artifacts.developer.gov.bc.ca/cc20-gen-docker-local/nr-funbucks:latest
Since it's not a Funbucks template, I have no way to generate a local FluentBit setup. You're on your own with the configuration unless you can provide that local equivalent. Your pipeline to getting new logs to production should have a way to generate a local FluentBit setup. Otherwise, you'll be wasting a lot of time copying and pasting and modifying things manually.
Hi @mbystedt so sorry about all of those errors. We forgot to delete the PR deployment, since that was only a preview. Now is gone.
There should be a Funbucks template, though. I'll push a commit and open a PR on Funbucks repo. Very likely we'll need some help to do those changes, but I think we can use the PR comments section :)
Once we have Funbucks in place, we can work with nr-apm-stack-lambda.
Thanks for the help, as always.
cc: @mgaseta @SLDonnelly
@RMCampos Excellent. Once there's that template PR in Funbucks, if you give us a sample then we'll be able to get a better idea of what the problem is. The errors are about it not having an index. So, it's likely a fingerprint issue.
Let us know if that lambda container works for you. We moved all the integration docs to https://bcdevops.github.io/nr-apm-stack. Welcome any thoughts you have. Feel free to do a PR if you want to; the files are in the docs folder of the [apm-stack repo](https://github.com/BCDevOps/nr-apm-stack/tree/main/docs).
|
GITHUB_ARCHIVE
|
Hello dear gurus,
I am doing an analysis with 8 runs (rest, task1, task2, etc.) and when I use afni_proc.py, it concatenates all the runs into one errts dataset. However, I need them independently for my further analyses. I can use 3dbucket to separate them, but subjects differ with respect to tasks (some subjects have all the tasks, some of them only did resting state, some of them did 4 tasks and ghosted, etc.). And if I use an if-else loop for the job, there would be 2^8 = 256 permutations, which would take way too much time and would probably be buggy. Is there a way to get separate datasets from the afni_proc.py output?
Typically, one puts a set of EPI dsets into a single afni_proc.py command that makes sense to go together.
If you have runs that are very different—particularly task and rest, or unrelated tasks (a categorization which includes task and rest…)—then I would not put them all together into a single afni_proc.py command. I don’t see what that would mean— GLTs would be hard to specify for the task part, and typically task and resting FMRI have different processing paths (e.g., one includes the derivatives of motion params in rest often, but not in task).
I don’t understand the 2^8 permutation estimate.
You should likely just process each set of similar tasks together in batches. If nothing “goes together”, then you would have N afni_proc.py runs.
Cases like this are why we recommend using @SSwarper for skullstripping+nonlinear warping before running afni_proc.py: you only run that once, and then pass its results into each afni_proc.py instance. Same goes for running FreeSurfer: you run it on your subject’s anatomical once, and then you can use it in each subsequent AP run.
I see. I am using errts files for processing (not doing GLTs) and did the @SSwarper first, so I feed those files to afni_proc. The issue is there are too many subjects and too many runs so using a different afni_proc.py for every run of every subject would mean too much work. If I can’t find a way around though, I will use that method.
2^8 is not important at this point, but: suppose you have runs 1, 2, …, 8. Each run has two states — it exists or it doesn't. Same for the second one, the third one, and so on. So if everything exists, you have 1 1 1 1 1 1 1 1; if the last one doesn't exist: 1 1 1 1 1 1 1 0; if the second and fifth don't exist: 1 0 1 1 0 1 1 1, etc. Two possibilities for every run: 2x2x2x2x2x2x2x2. I'm not sure the exact mathematical word for this is permutation or not (haven't seen maths in the last 10 or so years) but this is what I meant.
Best and thanks for the always fast and detailed reply,
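For what it's worth, the 2^8 branching can be avoided entirely by looping over whichever runs actually exist, rather than enumerating presence/absence combinations. A hedged Python sketch — the file names are invented for illustration, not actual afni_proc.py outputs:

```python
from pathlib import Path

ALL_RUNS = [f"run{i:02d}" for i in range(1, 9)]  # run01 .. run08

def runs_present(data_dir):
    """Return only the runs whose files actually exist for this subject."""
    return [r for r in ALL_RUNS if (Path(data_dir) / f"{r}.nii").exists()]

def process_subject(data_dir):
    # One pass over the existing runs replaces all 2^8 if/else branches:
    for run in runs_present(data_dir):
        print(f"processing {run}")  # e.g. invoke 3dbucket here, per run
```

The same pattern works in a tcsh script with a `foreach` over a glob: the shell only ever sees the files that are there, so missing runs need no special cases.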
|
OPCFW_CODE
|
It's often claimed that we only use a fraction of the capacity of our brains. But every so often, someone comes along who seems to get something that the rest of the world can't. These are the people that start civil rights movements and make milestone scientific discoveries, who change the face of art, whose names echo around the world. Often, these people are seen as eccentric, radical, and occasionally even crazy. Crazy, but brilliant. So, are they really created with superior knowledge, or did they simply decide to cross the line and see what would happen?
In my not-so-humble opinion, progress cannot be made unless someone branches off into the unknown. We're curious creatures, but social and moral taboos restrict us to certain perimeters of knowledge. Our functioning brain capacity is restricted enough; why on Earth shouldn't we use everything we've got?! The answer is that people are scared. Scared to break the social "rules" and have assumptions made about them. But the organizations or governments who establish these often unspoken rules are scared, too. The only reason to restrict education on a certain topic is because people might disagree with the accepted social standings. They might even be able to prove them wrong.
Ever think about that? Because I do. I spend an enormous amount of my time thinking about that, and researching all the things that people whisper about but can't openly say. People are usually rather shocked. You were reading an article on what? Are you some kind of (fill in the blank with a controversial trait i.e. lesbian, stoner, emo, sex addict, etc.)? No. Just because I want to be educated about things that no one seems to properly educate themselves about doesn't mean that I practice those things. Reading a book in which the main character self-harms isn't going to make me slash up my arms. Watching an educational video about a drug isn't going to make me want to get twisted all day every day. What is there to fear from education? And yet, our society condemns it. Often we only know negative results of things without understanding why or how or anything other than the fact that it's "wrong" or "bad."
Well, who says? Who says we shouldn't explore? Do you think you'd be reading this today if Benjamin Franklin hadn't flown his kite in a storm and shown that lightning is electricity? Do you think people would live in all parts of the world if explorers hadn't dared to act on the idea that the Earth is round? Do you think civil rights would exist to the extent that they do if Martin Luther King Jr. had decided that he'd better just play it safe and accept inequality? The world is where it is today because people defied the norm. We cannot learn anything if we refuse to explore that which is untouched. Even if a particular tabooed substance or practice doesn't contribute to society, some tweaking and deeper research can lead to all new, exciting, life-changing discoveries. There is no harm in education unless someone's got something to hide.
If you're human, you probably have some thoughts locked up in that brain of yours that you'd never dare speak aloud. Even if you're not going to let them leave the confines of your mind, my opinion is that those untouchable questions are the ones most worth answering. No one can arrest you for thinking something because, as of now, we can't read minds. So what is the harm in exploring what's nagging your subconscious? Maybe you know something isn't quite right in your life but you don't want to turn over rocks. I encourage you to do just that. You never know. You could be the next "crazy" person who changes the world.
|
OPCFW_CODE
|
Using an MSN account on different devices has become a necessity at present. This mailing platform provided by Microsoft is among the leaders. With easy and effective features, it helps with proper communication and data storage services too. An MSN Support Number is available for users who need support for any issues they run into with their mailing platform.
How to contact MSN Customer care Number for Technical Issues?
As MSN does not provide any official number, users can go to its official site to contact the team. Users can then choose the questions that match their issues. They are then provided with a step-by-step solution for any queries they have.
Users can also directly contact with the experts by dialing MSN contact number, which is a third party technical support number and can easily discuss their issues with experts directly.
Why you need to dial MSN technical support number?
There are certain issues in MSN, such as configuration issues, deleting junk mail, problems sending and receiving mail, attachment-related issues, and much more. All these issues can be easily eradicated if users dial the toll-free MSN Phone Number and get in touch with the experts for instant help. The helpline number is available 24/7, 365 days a year.
SOME COMMON PROBLEMS YOU MIGHT BE PRONE TOWARDS MSN:
• MSN phishing and hacking problems
• Privacy related concern
• Forgotten password issues
• Not able to send or receive mails
• Sign up issues in MSN
• Spam mail related issues
• Compromised MSN account hindrances
• Unable to delete MSN account
• Installing MSN app in android devices is difficult
• Unable to attach files
The problems that MSN users face are many. But with the immediate help from experts all these technical complexities can be easily removed. You can take help from the technicians of MSN by dialing MSN Customer Service number for help.
MSN Password and Account Recovery steps:
• Go to the MSN log in page.
• Click on the "Reset Password" link button.
• Choose the appropriate reason for your password recovery.
• Enter your recovery email ID in the given field.
• Enter the captcha verification characters given in the screen.
• Choose the recovery option as mail or phone number.
• Enter the code that you have received in your recovery mail or phone.
• You will then be prompted with a new screen where you will be able to create a new password.
MSN Account reset and Password Reset Steps:
• Log into your MSN account.
• Go to the account settings page.
• Find and click on the "change Password" link button.
• Now enter your current password.
• Then enter the new password in the given field and re-enter it to confirm.
• Click "Save" to save the changes.
DIAL CONTACT NUMBER FOR MSN SUPPORT
Users may reach MSN professionals directly through the third-party MSN technical support number 1-888-958-7518. If any issue or problem persists, users will be able to attain a complete solution for the same through the MSN technical support services.
All complex to normal issues and hindrances are being removed by MSN professionals directly in less time period. If you are also facing any of these errors, you can take help from MSN Support by experts. You can visit for online tech support at https://www.globalpccure.com/Support/Support-For-MSN.aspx.
November 18, 2018 7:59 am
|
OPCFW_CODE
|
Extension not decompiling nuget packages right SqlCommand & SqlReader
Environment data
dotnet --info output:
.NET SDK:
Version: 7.0.110
Commit: ba920f88ac
Runtime Environment:
OS Name: fedora
OS Version: 38
OS Platform: Linux
RID: fedora.38-x64
Base Path: /usr/lib64/dotnet/sdk/7.0.110/
Host:
Version: 7.0.10
Architecture: x64
Commit: a6dbb800a4
.NET SDKs installed:
7.0.110 [/usr/lib64/dotnet/sdk]
.NET runtimes installed:
Microsoft.AspNetCore.App 7.0.10 [/usr/lib64/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 7.0.10 [/usr/lib64/dotnet/shared/Microsoft.NETCore.App]
Other architectures found:
None
Environment variables:
DOTNET_ROOT [/usr/lib64/dotnet]
global.json file:
Not found
VS Code version: 1.82.2
C# Extension version: v2.1.2
Steps to reproduce
Install this package https://www.nuget.org/packages/Microsoft.Data.SqlClient/5.1.1
Write this code
using Microsoft.Data.SqlClient;

var builder = new SqlConnectionStringBuilder
{
    DataSource = "localhost",
    UserID = "sa",
    Password = "",
    InitialCatalog = ""
};

try
{
    using var connection = new SqlConnection(builder.ConnectionString);
    Console.WriteLine("\nQuery data example:");
    Console.WriteLine("=========================================\n");
    connection.Open();

    var sql = "SELECT name, age FROM People";
    using var command = new SqlCommand(sql, connection);
    using var reader = command.ExecuteReader();
    while (reader.Read())
    {
        Console.WriteLine("{0} {1}", reader.GetString(0), reader.GetInt32(1));
    }
}
catch (SqlException e)
{
    Console.WriteLine(e);
}

Console.WriteLine("\nDone. Press enter.");
Ctrl + Click the ExecuteReader method
Expected behavior
Decompiled library successfully with intellisense working
Actual behavior
Notice there is no type inference for SqlDataReader and its return type is null
Additional context
When getting intellisense for reader, the list is not complete. In this case is missing the Dispose() method.
Looking at the SqlClient source code for v5.1.1, the method is defined without an [EditorBrowsable(EditorBrowsableState.Never)] data annotation, so it should be shown.
Also tested with VS Code C# Extension v1.26.0, it works but the decompiling is different compared to the source.
When getting intellisense for reader, the list is not complete. In this case is missing the Dispose() method.
The method linked is protected, so it doesn't show up in intellisense. The actual Dispose method it would bind to is in System.Data.Common.DbDataReader, which does have the attribute.
So this is by design.
Decompiled library successfully with intellisense working
Not sure exactly what is happening here, but its likely an issue in ICSharpCode.Decompiler which we use to decompile. I've noticed the same issue reproduces in VS as well. There's probably not much we can do on the extension side.
The method linked is protected, so it doesn't show up in intellisense. The actual Dispose method it would bind to is in System.Data.Common.DbDataReader, which does have the attribute.
@dibarbet thanks for pointing that out. Now I see the attribute, but I'm curious why it should not be shown in intellisense?
I think we may remove the [EditorBrowsable(EditorBrowsableState.Never)] data annotation.
I see ILSpy is also unable to decompile this, so almost certainly a bug or limitation on their end.
So perhaps we should create an issue there so they can look into it?
|
GITHUB_ARCHIVE
|
In an open-circuited transmission line (microwave circuits) the current in the return path is completely zero?
Since the current meets the open boundary and gets reflected back does this mean that the return path has zero current always on it? Or there is also a standing wave there as well with a current with an opposite direction of propagation?
The same I would ask about a lambda/2 dipole antenna which is a standing wave line as well.
What do you mean by "return path"?
Are you asking what happens at the point of the open boundary condition? Or are you asking what happens along the whole length of the line leading to the boundary?
The return path I mean the second wire or metal plate. For a coaxial it would be the outer wire. For a waveguide of parallel plates would be the bottom plate.
@bigboss no, very much not.
@bobbs I don't get what you mean by no.
A transmission line doesn't have a "supply path" and a "return path". The two conductors that make up a TL are coupled throughout their length so that they always have opposite currents — this is what makes it a transmission line, and it's true even if you're "only driving one side", with the other referenced to ground.
The relationship between voltage and current in a transmission line changes along its length; if one end is an open circuit then there is zero current there, but a short distance away the current is nonzero (in a sense, the current at that point sees the remaining TL stub as a capacitor), and a quarter wavelength away the current reaches a maximum.
So... In my case in the second conductor we would have a standing wave across the conductor with opposite orientation?
@bigboss yes, if I understand you correctly.
You're correct that, at the open end of the transmission line or antenna, the current is zero. At other points on the line, there is a nonzero current; it's at a maximum 1/4 wavelength from the open end. In your dipole, that's 1/4 wavelength from each end, putting it at the center feed point. The voltage is maximum at the open end(s) and minimum at the feed point, but nonzero based on the characteristic impedance of the antenna.
What I ask is what happens on the second "wire" or "metal plate" of the transmission line in open circuit conditions. The current cannot form a loop, obviously. Therefore, do we have a standing wave of opposite orientation to that on the first wire (the "hot" one, or call it whatever)? Or is it completely zero?
@bigboss, remember there is capacitance between the "primary" conductor and the reference conductor, so there is a loop. (Also, your argument applies equally to both conductors. If the open termination blocked all current in the transmission line there couldn't be current in either conductor)
@bigboss the standing wave is the loop. There's no path at DC, but RF is a different story.
@The Photon this was a pretty good explanation thanks!
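The current and voltage distributions described in the answers can be sketched numerically. A hedged Python illustration of the ideal lossless-line model (open end at z = 0; envelopes are shown up to a constant, so this is a textbook picture rather than a simulation):

```python
import math

WAVELENGTH = 1.0
BETA = 2 * math.pi / WAVELENGTH  # phase constant of the line

def current_envelope(z):
    """|I(z)| on a lossless open-circuited line, open end at z = 0.
    Up to a constant: |I(z)| = |sin(beta * z)| -- zero at the open end."""
    return abs(math.sin(BETA * z))

def voltage_envelope(z):
    """|V(z)| up to a constant: |cos(beta * z)| -- maximum at the open end."""
    return abs(math.cos(BETA * z))

# Zero current at the open end, maximum current a quarter wavelength away,
# and the voltage pattern is shifted by exactly that quarter wavelength:
print(current_envelope(0.0), current_envelope(WAVELENGTH / 4))
print(voltage_envelope(0.0), voltage_envelope(WAVELENGTH / 4))
```

Both conductors of the line carry this same standing-wave envelope, with opposite instantaneous current directions, which is the point made above about the "return" conductor.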
|
STACK_EXCHANGE
|
from django.http import HttpResponseRedirect
from django.urls import reverse
from peytalaneApp.models_dir.user import User
from django.http import HttpResponse


def IsLogin(function):
    """
    Decorator that checks that the user is logged in.

    Passes various pieces of information about the user to the view, in this order:
    * whether the lan has already been reserved
    * the list of food items already paid for by the user
    * the list of tournaments already paid for by the user
    * whether the user is an admin
    * the cart total in euros (remember to recompute it in the view after adding an item to the cart)

    Usage:
    ```
    @IsLogin
    def get(self, request, lan_is_reserved, have_foods, have_tournament, is_admin, total, *args, **kwargs):
    ```
    """
    def wrap(self, request, *args, **kwargs):
        print(request.user)
        if request.user.is_authenticated:
            user = request.user
            # It's ugly, but it lets us bypass how *args works.
            if user.lan:
                lan_is_reserved = 1
            else:
                lan_is_reserved = 0
            is_admin = user.admin
            transactions_list = request.session['transactions']
            total = sum(transactions_list[key]['price'] for key in transactions_list)
            have_foods = user.payment_set.filter(type_product="food")
            have_tournament = user.payment_set.filter(type_product="tournament")
            return function(self, request, lan_is_reserved, have_foods, have_tournament, is_admin, total, *args, **kwargs)
        else:
            return HttpResponseRedirect(reverse('login'))
    wrap.__doc__ = function.__doc__
    wrap.__name__ = function.__name__
    return wrap


def IsAdmin(function):
    """
    Decorator that checks that the user is an admin.

    Passes the same parameters as the IsLogin decorator, namely:
    * whether the lan has already been reserved
    * the list of food items already paid for by the user
    * the list of tournaments already paid for by the user
    * whether the user is an admin (always True here)
    * the cart total in euros (remember to recompute it in the view after adding an item to the cart)
    """
    def wrap(self, request, lan_is_reserved, have_foods, have_tournament, is_admin, total, *args, **kwargs):
        if is_admin:
            return function(self, request, lan_is_reserved, have_foods, have_tournament, is_admin, total, *args, **kwargs)
        else:
            return HttpResponse('Forbidden', status=403)
    wrap.__doc__ = function.__doc__
    wrap.__name__ = function.__name__
    return wrap
|
STACK_EDU
|
I'm using Indy components with D2007 and am trying to list the subjects of messages from an IMAP mailbox. I downloaded and installed the current new Indy version 10.6.0.5039 (installing the x100 packages) and tried various OpenSSL DLL versions (32-bit, on an XP machine, copied both into the system32 dir and into my app dir) but always got a "could not load ssl library" error. Could someone tell me the right Indy dcl package and OpenSSL DLL to use with D2007? Using the function WhichFailedToLoad I get the result: "SSL_CTX_set_info_callback_indy X509_STORE_CTX_get_app_data_indy X509_get_notBefore_indy X509_get_notAfter_indy SSL_SESSION_get_id_indy SSL_SESSION_get_id_ctx_indy SSL_CTX_get_version_indy SSL_CTX_set_options_indy des_set_odd_parity des_set_key des_ecb_encrypt"
The WhichFailedToLoad() function in the IdSSLOpenSSLHeaders unit tells you why OpenSSL could not be loaded. The latest snapshot of Indy 10 uses the latest version of OpenSSL. There are OpenSSL DLLs available for download from Indy's Fulgan mirror:
Remy, thanks for your reply. From the WhichFailedToLoad function I get a lot of error values, and I still need a response to my 2 questions: with my D2007 I compiled and installed indysystem100, dclindycore100 and dclindyprotocols100 from fulgan indy10_5039.zip, and I copied ssleay32.dll and libssl32.dll from fulgan openssl-1.0.1e-i386-win32.zip, which seem to be the latest versions for Indy and for SSL. Component TIdIMAP4 has a utUseImplicitTLS option and component TIdSSLIOHandlerSocketOpenSSL has an sslvTLSv1 method. Aug 13, 2013 at 8:05
You should not be getting "a lot of error values". If the DLLs cannot be loaded at all, only the filenames should be reported. If the DLLs are loaded, then only missing functions should be listed. If you are getting "a lot" of missing functions, then something is seriously wrong. Please update your question with the actual output from WhichFailedToLoad(). Aug 13, 2013 at 9:26
Actual output from WhichFailedToLoad is "SSL_CTX_set_info_callback_indy X509_STORE_CTX_get_app_data_indy X509_get_notBefore_indy X509_get_notAfter_indy SSL_SESSION_get_id_indy SSL_SESSION_get_id_ctx_indy SSL_CTX_get_version_indy SSL_CTX_set_options_indy des_set_odd_parity des_set_key des_ecb_encrypt" Aug 13, 2013 at 9:42
The fact that there are "..._indy" functions listed tells me that you are trying to use newer OpenSSL DLLs with an OLD version of Indy. Early versions of Indy had to use custom-built OpenSSL DLLs that exported custom "..._indy" functions to access private OpenSSL data that has since been publicly exposed in later OpenSSL versions. Modern Indy releases (especially 10.6.0) use standard OpenSSL DLLs. So this tells me that your app is NOT using Indy 10.6, like you claim, but is in fact using Indy 9 or earlier. Those custom Indy OpenSSL DLLs are available in Fulgan's SSL\Archive folder. Aug 13, 2013 at 17:10
|
OPCFW_CODE
|
Make your life easy!
ALL-IN-ONE Windows Server disk management toolkit
Time Limited Offer - 20% OFF
30-day Money Back Guarantee
We need to change partition size on our Windows Server when the current partition size cannot meet our needs to maximize computer performance. Taking easy operation and data security into consideration, an easy and safe solution to change partition size without losing any data is in demand. This article is aimed at helping server users change partition size on Windows server 2003, which also applies to Windows Server 2000 and 2008.
Windows Server 2003 (also referred to as Win2K3) is a server Operating System produced by Microsoft. It is considered by Microsoft to be the cornerstone of its Windows Server System line among business server products. According to Microsoft, Windows Server 2003 is more scalable and delivers better performance than its predecessor, Windows 2000. However, it can not do everything that more popular software can. Nowadays, so many people are willing to rely on software to change partition size of Windows Server 2003 in a fast and safe way.
The size of each partition on a Windows Server is carefully allocated while building the Server. However, things change, and the planned size may not always meet your needs, especially for the System partition, as Windows continues to download large updates, among other reasons. If the System partition is running out of space and struggling for precious unallocated space, you are unable to install new programs, and even the overall performance of your computer will decline: the computer goes slow. Then you need to change the system partition size on your Windows Server 2003.
Windows Server 2008 users can change partition size with the Disk Management tool built into Windows Server 2008. However, as a Windows Server 2000/2003 user, you can only turn to third-party server partition manager software to accomplish the job. Many prefer EaseUS Partition Master Server Edition, which helps even non-professional users change partition size on Windows Server 2003 in a professional way and without losing any data. And EaseUS Partition Master Unlimited Edition allows unlimited usage within one company if you need to use it on multiple machines. You can reclaim wasted disk space, organize your data, and speed up file system performance. Besides, EaseUS Partition Master is absolutely user-friendly. With this easy-to-use partition resizer, even your grandma can change partition size for you.
To change partition size on Windows Server without data loss using EaseUS Partition Master Server Edition, first log in as Administrator and launch the software. Then use the resize utility: select the partition on Windows Server 2003 you want to change, and click Partitions > Resize/Move partition. The current size of every partition is displayed on the disk diagram. The minimum and maximum sizes you can resize to depend on the free space within and surrounding the partition. The change takes effect after you reboot the system.
Besides the function to change partition size on Windows server, you can enjoy more features from EaseUS Partition Master Server Edition for server computer partition management, including: Copy Partition, Copy Disk, Create and Delete partitions, Format partitions, Convert Partitions, Explore Partitions, etc.
Other partition management options are also available in this partition software:
To learn more about EaseUS Partition Master Server
Note: when you manage your disks and partitions, your server is at its greatest risk of data loss. To ensure data security, we suggest you use backup software for your Windows Server 2000, 2003, or 2008.
|
OPCFW_CODE
|
The wonderful world of SAT solvers
Imagine a circuit designer wants to verify the accuracy of a specific computation before it goes into production, and imagine a college trying to find the best way to schedule all their exams. On the surface, the problems seem pretty different, but the two are both instances of the same problem: SAT, or Boolean satisfiability.
Look at this statement: (A or B) and (not A or B). Determining which inputs make this statement true is a SAT problem. Trivially, the statement depends only on B being true: A being either true or false satisfies only one of the two clauses, while B being true satisfies both. However, with an arbitrary number of inputs, the problem's difficulty becomes increasingly apparent. The problem is so hard it's been proven NP-complete by the Cook-Levin theorem.
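To make this concrete, here is a minimal brute-force satisfiability check in Python; the clause encoding and the function name are my own invention, not from any SAT library:

```python
from itertools import product

def satisfying_assignments(clauses, variables):
    """Try every assignment; keep those where each clause has a true literal.
    A literal is a (variable, polarity) pair, e.g. ("A", False) means 'not A'."""
    result = []
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == polarity for v, polarity in clause)
               for clause in clauses):
            result.append(assignment)
    return result

# (A or B) and (not A or B)
clauses = [[("A", True), ("B", True)], [("A", False), ("B", True)]]
print(satisfying_assignments(clauses, ["A", "B"]))
```

Exactly as argued above, every assignment it finds has B set to true, and the loop over all 2^n assignments is what makes this naive approach explode as variables are added.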
In computer science, NP-complete problems are considered the hardest problems with quickly verifiable solutions. However, just because the solutions are quickly verifiable doesn’t necessarily mean the problem itself can be solved quickly. This is the essence of the P versus NP problem, one of the biggest unsolved problems in the field of computer science. If P doesn’t equal NP, then there is no polynomial algorithm that can solve all instances of an NP-complete problem.
Before discussing algorithmic advances, let's look at the formalism and structures hidden within this problem. Take a formula such as (X or Y) and (not X or Z). A variable (X, Y, Z) or its negation (not X) is called a "literal." The groups of literals joined by "or" are the formula's "clauses." An input of this shape, a conjunction of clauses, is said to be in "conjunctive normal form."
Namely, to start solving SAT, a value is assumed for one of the variables, and this process continues until all variables are fully defined. At this point, we check whether the resulting expression is true or false. If it is false, we go back to our most recently defined variable and change it, working through all possible assignments. Although this type of brute-force search, a depth-first search, is ill-advised in practice, it creates a friendly visual aid.
Implicitly, this process outlines a binary tree. Left-hand "branches" set a variable to false, while right-hand "branches" set it to true. The number of branches doubles at each level, and, since a level exists for each variable, the complexity is exponential.
Let's optimize this algorithm. The first optimization is the "heuristic search." The main idea of a heuristic search is to focus on the area most likely to provide the most bang for the buck. In this case, whenever a clause contains a single literal, that literal must be true. Likewise, whenever all but one of the literals in a clause are assigned false, the value of the remaining variable can be deduced by noting the assignment that makes the clause true.
It’s prudent to look out for these freebies in order to deduce the actual value of some literals in order to reduce the amount of guesswork. Computer scientists call this process "unit propagation." Another optimization to our algorithm would be to order the assignment of literals by how frequently they show up in the conjunctive form, i.e., if you see more X’s than Y’s, assign values to X’s first. Ultimately, this reduces the number of decisions required by our algorithm.
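Unit propagation can be sketched in a few lines of Python. The representation is my own (each literal is a (variable, polarity) pair), and the function name is hypothetical: clauses left with exactly one unassigned literal and no true literal force that literal, while a clause with all literals false signals a conflict.

```python
def unit_propagate(clauses, assignment):
    """Force the value of any clause left with a single unassigned literal
    and no true literal; return None if some clause has all literals false."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(v) == p for v, p in clause):
                continue                      # clause already satisfied
            unassigned = [(v, p) for v, p in clause if v not in assignment]
            if not unassigned:
                return None                   # conflict: every literal is false
            if len(unassigned) == 1:          # unit clause: the value is forced
                v, p = unassigned[0]
                assignment[v] = p
                changed = True
    return assignment

# (A) and (not A or B): propagation alone decides everything, no guessing.
print(unit_propagate([[("A", True)], [("A", False), ("B", True)]], {}))
# → {'A': True, 'B': True}
```

Note how forcing A cascades into forcing B: each propagation can enable further propagations, which is exactly why solvers run this to a fixed point after every guess.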
One issue we haven't tackled yet is the cost of constantly iterating over the formula, which can be thousands if not millions of clauses long. To reduce this cost, SAT solvers typically keep track of only two literals per clause, called watched literals, since unit propagation is only possible once all but one literal in a clause is false. Whenever a watched literal becomes false, the watch is moved to another literal where possible. This watched-literal scheme, along with the activity-based decision heuristic VSIDS (Variable State Independent Decaying Sum), was popularized by the zChaff SAT solver.
Trying to optimize beyond this point becomes increasingly difficult. The trick to understanding the additional optimization levels is to use our failures to increase our chance of success by searching non-sequentially from within our algorithm. In our search for a solution, it’s likely that specific configurations of literals will almost always lead to contradictions.
How can we use these contradictions to help us? By examining the contradiction, identifying which literals were involved, and tracing the contradiction to its roots using our tree diagram, we should be able to define these contradictions. It would be nice not to repeat such contradictions, but how do we do this?
This is actually much simpler than it first appears when considering the law of contrapositives. To illustrate, assume I always cry when receiving a failing grade. If I had not cried on a particular day, I could not have received a failing grade. These contradictions can thus add clauses to our original problem through their contrapositive forms.
You might be thinking that this seems like we’re increasing the size of our problem, not decreasing it. This is true, but this maneuver doesn’t actually increase our search space as these variables were already in the original problem. The benefit to these clauses is that they allow for more unit propagations, telling us whether a future guess is reasonable or unreasonable, paradoxically decreasing the search space. Recalling the tree diagram, this technique allows us to skip large portions of the tree in a process called non-chronological backtracking.
With this approach, however, the conjunctive form becomes littered with excess clauses. Thus, our algorithm needs to simplify and combine clauses when possible, a process called clause minimization. However, minimization can only go so far, and a problem-solver is forced to keep a few clauses. Luckily, there are some heuristics for evaluating which clauses to hold onto, but this is still an active area of research. Taken together, these techniques are examples of clause maintenance.
As a SAT problem is solved, some parts of the search space (our tree diagram) are better explored than others. Thus, discovering conflicts implicitly saves facts about the local context of the problem. These facts, as well as the last attempted assignment, are beneficial because they allow the solving algorithm to resume work in a previously explored section of the search space. Storing this information is called the phase-saving heuristic. These additional optimizations define the class of conflict-driven clause learning (CDCL) SAT solvers, from which almost all modern SAT solvers derive.
As almost all computer science students know, randomization can be a tremendous asset for ensuring good performance on a problem. There could be an adversary making our algorithm look bad by forcing inputs into a particular, non-optimal form, or, more realistically, just bad luck that makes the algorithm perform suboptimally. Resolving this issue requires periodically restarting the search. This ensures that the contradictions and unit propagations the algorithm finds cover many clauses and literals within our tree, letting the solver pick up on general trends while reducing the chance of getting stuck in an unlucky portion of the search.
Practically speaking, the implementation of this algorithm is an issue unto itself. To make the best SAT solver, one has to efficiently store the clauses, organize the search, encode and process the original problem to aid the SAT solver’s heuristics, and find a way to parallelize the search. All of these requirements are nontrivial to satisfy.
There is an extension to SAT solvers called Satisfiability Modulo Theories. SMTs aim to apply the reasoning ability of SAT solvers to more general problems in math and logic. For example, given two functions’ values, rules of composition, and properties, it can be determined whether their inputs will equal each other. However, this type of extension to SAT solvers requires us to program the rules and associated logic of the mathematical objects in question. We then need to build another application to orchestrate these systems of logic, called theories.
Not only can SAT solvers find optimal configurations, but they can also prove if such configurations are non-existent. SAT solvers can disprove statements by exhausting a search for a satisfying arrangement, or by noting the realization of unavoidable conflicts. Such a disproof is known as an UNSAT proof.
SAT, despite these advancements, is still an NP-complete problem, and all known algorithms take exponential time in the worst case. However, there seems to be a hidden structure in the SAT problems relevant to humans that separates them from randomly generated SAT instances, which is why these algorithms perform exceptionally well even when scaled to problems with millions of clauses and variables. The evolution of SAT solvers shows two quite remarkable things: humans will find a way to overcomplicate anything, even something as simple as brute-force search, and algorithms are very general tools, much simpler than one may fear, that are impactful even in areas the original designer never anticipated.
|
OPCFW_CODE
|
BindException: No Such Object when upgrading to v3
Laravel Version: 5.4
Adldap2-Laravel Version: 3.0
PHP Version: 5.6
Description:
I've taken two routes to get to Adldap2-laravel v3:
Upgrade
Clean install
Both times I followed the instructions on the site. I'm coming from a working v2 setup. However, no matter what, I get the following error in tinker:
>>> Auth::attempt(['username' => '<username>', 'password' => '<password>'])
PHP warning: ldap_bind(): Unable to bind to server: No such object in /var/home/codydh_local/Development/app/vendor/adldap2/adldap2/src/Connections/Ldap.php on line 262
Adldap\Auth\BindException with message 'No such object
or the following error on the web:
BindException in Guard.php line 80: No such object
Notably, I am using OpenLDAP, but as I said, this configuration works perfectly with v2. Any ideas?
Notably, I am using OpenLDAP, but as I said, this configuration works perfectly with v2. Any ideas?
There were significant configuration changes from v2 to v3, did you re-publish the Adldap2 config files?
I took two approaches:
Upgrading the existing installation, keeping adldap.php, and publishing a new adldap_auth.php with the correct settings based upon the previous adldap_auth.php that worked.
Creating an entire new skeleton Laravel app with only ADLDAP2 v3, and using fresh configuration files with all the correct settings.
These both result in the same outcome.
Strange, however since you're using OpenLDAP, this is most likely due to an incorrect user DN when authenticating against your server.
Are you sure you're authenticating with the user's correct DN? I would check your configuration, notably the account_prefix and account_suffix options.
So if I compare what I've got in my v2 configuration and my v3 configuration, my account_prefix and account_suffix are exactly the same. I also translated:
'username_attribute' => ['username' => 'uid'],
to
'usernames' => [
'ldap' => 'uid',
'eloquent' => 'netid',
],
And of course set 'schema' => Adldap\Schemas\OpenLDAP::class,.
Everything else is a relatively straightforward translation of configuration from one version to another...
Ok, I believe I've figured this out.
Previously, I had to specify my 'admin_username' in the format uid=xxxx,ou=xxxx,dc=xxxx,dc=xxx and specify an 'admin_account_suffix' = ' ' (empty space), but removing that and setting my 'admin_username' to just the username seems to have fixed it.
However, I can now authenticate in php artisan tinker using:
Adldap::auth()->attempt('username', 'password');
=> true
However, doing so via the default Laravel web auth routes tells me I have an incorrect password. I have switched the view to be 'username' instead of 'email'. What might cause this? I believe my ADLDAP configuration must be correct if it's working via tinker?
However, doing so via the default Laravel web auth routes tells me I have an incorrect password. I have switched the view to be 'username' instead of 'email'.
Did you switch the username inside your LoginController as well? By default it's set to email.
In /app/Http/Controllers/Auth/LoginController.php I've got:
<?php
namespace App\Http\Controllers\Auth;
use Adldap\Laravel\Traits\HasLdapUser;
use App\Http\Controllers\Controller;
use Illuminate\Foundation\Auth\AuthenticatesUsers;
class LoginController extends Controller
{
/*
|--------------------------------------------------------------------------
| Login Controller
|--------------------------------------------------------------------------
|
| This controller handles authenticating users for the application and
| redirecting them to your home screen. The controller uses a trait
| to conveniently provide its functionality to your applications.
|
*/
use AuthenticatesUsers, HasLdapUser;
....snip....
public function username()
{
return 'username';
}
}
The easiest way to troubleshoot this, is to dive into your vendor folder and open up adldap/adldap2-laravel/src/Auth/DatabaseUserProvider and dump & die the user after this line:
https://github.com/Adldap2/Adldap2-Laravel/blob/master/src/Auth/DatabaseUserProvider.php#L105
Once you've made that edit, try authenticating and see what's returned. If no user is returned, something in your configuration may be off, and you can begin diving into the resolver and seeing why the query isn't returning your user:
https://github.com/Adldap2/Adldap2-Laravel/blob/master/src/Auth/Resolver.php#L39-L48
Also, the trait you've inserted in your controller is incorrect usage:
use Adldap\Laravel\Traits\HasLdapUser;
This is supposed to be inserted onto your User.php model:
https://github.com/Adldap2/Adldap2-Laravel/blob/master/docs/auth/binding.md
Ah thanks, my mistake with the trait on the controller, I did have that correctly on my User model as well.
Adding dd($user); at line 106 in adldap/adldap2-laravel/src/Auth/DatabaseUserProvider is not returning a user, as you guessed. I'm attempting to figure out what might be breaking down in byCredentials, but no luck yet. Is this method not used in auth()->attempt?
Ok just making sure! And no it's not.
Auth::attempt() is calling the Adldap2's Auth driver, while Adldap::auth()->attempt() is authenticating directly to your LDAP server. The auth driver eventually calls this method, but it needs to locate the user first for several features built into the auth driver.
Right, makes sense.
Is it possible there's some conflict between specifying the account prefix (here for me it's 'account_prefix' => 'uid=' and the specification of 'usernames' => ['ldap' => 'uid', ... ]?
There's definitely a possibility, I would open up the resolver and dump and die the $credentials argument to see what's being searched for in your LDAP server.
Closing due to inactivity.
|
GITHUB_ARCHIVE
|
Shared dispatcher/synchronization context with Renderer
Motivation:
Tests should be deterministic, and one of the biggest challenges with that is that the Renderer can asynchronously (re)render components while test code is being executed.
This can lead to subtle bugs, e.g. where a cut.Find("button").Click() results in the button being found with one event handler attached while Find is executing, but by the time Click runs, that event handler has changed. There are a lot of workarounds and safeguards in bUnit currently to make this as unlikely as possible, but there are probably still edge cases where users would have to wrap their test code in a cut.InvokeAsync(() => ...) call to ensure it runs without the renderer doing renders.
There are still some things we need to figure out:
[ ] How to control and set the sync context at test start
[ ] How to make the "wait for" APIs async (probably needed)
[ ] How to minimize the API surface changes
[ ] Will this affect event triggering?
[ ] A generic solution that works with all testing frameworks (xUnit, NUnit, MSTest)
[ ] How to lead the users to not deadlocking themselves, e.g. if they have a component that is waiting for data and a test that is waiting for a change.
Not sure if this is related to #687 or not, but I do believe the tests should wait for the render to complete.
My ideal scenario would be to not have the "wait for" APIs at all, but instead, make each call only return once the handlers/rendering are complete. e.g.,
var cut = await RenderComponentAsync<CounterComponentDynamic>();
// don't get to here, until the render is complete
await cut.Find("[data-id=1]").ClickAsync();
// don't get to here until the click event and subsequent render is complete
My understanding is that I can already call .ClickAsync() ... it's just the RenderComponentAsync<T>() that's missing. Is that the intention of this issue?
... you might want to write a test ...
My thinking is that that should be the exception, not the rule. (i.e., what you have now). Not allowing the async-style testing forces the tests to handle scenarios that they are not interested in. (i.e., forcing the test author to figure out when to use "wait for" APIs, and when they need to use InvokeAsync().)
Note, that I would imagine that even though the test doesn't continue until the render (the second render in your example) completes, the two renders still occur. (Catching, for example, exceptions caused by the 'loading...' not working.)
Not allowing the async-style testing forces the tests to handle scenarios that they are not interested in. (i.e., forcing the test author to figure out when to use "wait for" APIs, and when they need to use InvokeAsync().)
bUnit uses Blazor's renderer under the hood, and has to play by its rules. That means that any time an async method yields during a render of a component, the render is reported as "completed" to the caller, and the continuation of that async method is scheduled for a later render. So RenderComponentAsync just won't make a difference, unfortunately.
It maybe that the sync event handler trigger methods gets deprecated though, such that users will have to call e.g. ClickAsync() when they want to invoke a click event handler, and then the event returned should match when the event handler method completes.
We can avoid having to call InvokeAsync completely if the test code runs in the same sync context as the renderer. When you call InvokeAsync you are essentially switching over to it. Having a shared sync context also means that no render can happen while test code is executing, which avoids race conditions between test code and code under test.
The rule for when you would need to use the WaitFor methods, which would become async methods returning tasks, would be whenever you have something in your components that does not complete synchronously, e.g. if you have at Task.Delay in there. The problem now, which you and others have run into, is that it has not always worked as expected, due to some of the race conditions that have hopefully been mostly fixed now.
@FlukeFan Turns out that was not correct. At least in .NET 6 and later, it is possible to get a task back from the Blazor renderer that will not complete until there are no unfinished tasks in the render pipeline. Feel free to jump in on the discussion in #757.
|
GITHUB_ARCHIVE
|
Author: Tao Zhou <email@example.com> (Fri Jul 24 08:05:03 2020)
Committer: Tao Zhou <firstname.lastname@example.org> (Fri Jul 24 08:05:03 2020)
Replace @npm_bazel_rollup with @npm//@bazel/rollup This is needed after we have rules_nodejs upgraded to 2.0 Related changes in gerrit core: https://gerrit-review.googlesource.com/q/topic:%22bump_rules_nodes_to_2.0.0%22 Change-Id: I11fa3f0537950e80a82aca183baa9d4da8f0fbeb Reviewed-on: https://chromium-review.googlesource.com/c/infra/gerrit-plugins/chumpdetector/+/2316257 Reviewed-by: Edward Lesmes <email@example.com>
This plugin will allow developers to see the current status on their CLs. Status will be pulled from a configurable location. The Chromium project has many status sites such as:
The plugin will show users the status and will warn them in certain situations if they attempt to undertake actions that are contrary to the tree status.
The plugin does not provide enforcement - that is provided by other systems.
Before this plugin will work you'll need to configure it. You can do that by adding a new file to
refs/meta/config for a Gerrit project. The configuration file should be called
chumpdetector.config and should be of the form:
[project "some-interesting-project"]
    loginURL = https://login.appspot.com/?next=chromium-status
    viewURL = https://chromium-status.appspot.com/
    statusURL = https://chromium-status.appspot.com/current?format=json
    withCredentials = false
    enforceCommitQueue = false
    disabledBranchPattern = ^(?:refs/meta/config|refs/heads/.*)$
The project name doesn't matter; it can be anything you'd like.
If withCredentials is true and a request for status fails, the system assumes that a login is required and has not occurred. In that case the status message will change to "Login required. Click here to login." The text will be a link that points to the value of loginURL. If loginURL is not set, then the message will just be "RequestError: Login required." with no link.
viewURL should be the URL of your status app. It is used to provide a link that users can click on to go to a page that shows the full status.
statusURL should be the URL that returns a JSON blob of the current status. Usually it will be related to the viewURL value. This is the URL that will be queried via XHR periodically while the user is on the page to update the tree status. The request is made using the withCredentials parameter; you can read about it at https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/withCredentials
When true, this will show a modal warning to users if they try to submit the CL directly and warn them to use the CQ. This is Chromium concept, unless you have a Chromium-like CQ system. Just leave as false if you don't understand what any of those things mean.
CLs on branches that match this pattern will have the plugin disabled. This means they won't see tree status, and tree status won't affect the CL in any way.
This setting causes the plugin to request a URL via an image that is dynamically added to the page before making any other fetch requests. This is useful in situations where the request to the status URL would fail without certain cookies being set for the status request domain. By using an image tag, the browser will correctly follow any redirects that a login process may require in order to establish the session cookies.
Most installs will not need this configuration and can ignore it. For certain sites this may allow them to get security cookies before attempting status requests which will fail without the cookies.
cd ~/gerrit
ln -s ~/chumpdetector ~/gerrit/plugins
bazel build //plugins/chumpdetector
cp -f ~/gerrit/bazel-bin/plugins/chumpdetector/chumpdetector.jar ~/gerrit_testsite/plugins/
bazel build gerrit
java -jar ~/gerrit/bazel-bin/gerrit.war init --batch --dev -d ~/gerrit_testsite
~/gerrit_testsite/bin/gerrit.sh start
When you make changes to chumpdetector:
bazel build //plugins/chumpdetector
cp -f ~/gerrit/bazel-bin/plugins/chumpdetector/chumpdetector.jar ~/gerrit_testsite/plugins/
~/gerrit_testsite/bin/gerrit.sh restart
mkdir -p polygerrit-ui/app/plugins/chumpdetector
ln -s /path/to/chumpdetector-plugin/src/main/resources/static polygerrit-ui/app/plugins/chumpdetector/static
./polygerrit-ui/run-server.sh -host chromium-review.googlesource.com
You may also need to edit chumpdetectorURL to point directly at chromium-review.googlesource.com instead of being relative.
|
OPCFW_CODE
|
groupby aggregation does not preserve dtypes
Note the .astype('int8') in the following
d = pd.DataFrame({c:np.random.choice(range(10),size=20) \
for c in list('abcd')}).astype('int8')
d.groupby(d.columns.tolist(),as_index=False).size().info()
which yields
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 20 entries, 0 to 19
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 a 20 non-null int64
1 b 20 non-null int64
2 c 20 non-null int64
3 d 20 non-null int64
4 size 20 non-null int64
dtypes: int64(5)
memory usage: 928.0 bytes
Is there some way I can get pandas to preserve the original dtypes? It uses too much memory otherwise.
My pd.__version__ is 1.3.5, and same result with 1.5.3 (but not 2.0.2 as @rickhg12hs comments!). According to this answer the conversion should not occur, but that's inconsistent with what I observe.
With pd.__version__ == '2.0.2' on my machine, each Dtype is int8, except the size row which is int64.
Use .astype(np.int8) instead?
I think this is something that got cleaned up. The issue is coming from your index - I tried converting it to int8 but it doesn't want to take: https://pandas.pydata.org/pandas-docs/version/1.5/reference/api/pandas.Int64Index.html
Tried tricking it with a multiindex too, that didn't work
I don't think you want groupby here:
samp = pd.DataFrame([[1,2,3],[2,3,4]],columns=['a','b','c'])
print(samp.groupby(samp.columns.to_list()).groups)
{(1, 2, 3): [0], (2, 3, 4): [1]}
You're creating a dataframe where the index is your values, which is only really useful if you're trying to count how many times each row occurs, in which case better options exist:
counts = np.unique(d.values, return_counts=True, axis=0)
ret = pd.DataFrame(counts[0], columns=d.columns)
ret['size'] = counts[1].astype('int8')
A print(ret.dtypes) will assure you we're entirely still in int8.
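If you'd rather keep the groupby form on older pandas versions, another workaround (a sketch, reusing the question's setup; the astype dict is my own addition) is to cast the key columns back down afterwards. The grouped values themselves are unchanged, so the downcast is lossless:

```python
import numpy as np
import pandas as pd

d = pd.DataFrame({c: np.random.choice(range(10), size=20)
                  for c in list('abcd')}).astype('int8')

res = d.groupby(d.columns.tolist(), as_index=False).size()
# The key columns still hold the same small values, so this cast loses nothing.
res = res.astype({c: 'int8' for c in d.columns})
print(res.dtypes)
```

Only the generated 'size' column stays at the wider integer dtype; the original columns are int8 again, no matter which pandas version produced the intermediate result.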
|
STACK_EXCHANGE
|
However, privacy is still an issue for many critics of Bitcoin, as transactions are recorded on a public, open ledger. And there is no shortage of projects designed to mitigate this transparency.
Enigma Secret Contracts
Enigma is building a general protocol that allows for privacy to be maintained when interacting with smart contracts. In short, Enigma consists of a decentralized supercomputer run by multiple nodes, which are capable of running private computations. The nodes are, in return, rewarded with ENG tokens.
For example, the data contained by a particular smart contract on the Ethereum network can be encrypted and sent to the Enigma network. The nodes running the so-called supercomputer are in charge of running computations on the encrypted data to verify the validity of the transaction without compromising security.
Nodes are incentivized to act honestly by being rewarded with ENG tokens upon correctly verifying data. The benefit of this approach is that any blockchain that supports the Enigma protocol can provide an extra layer of security for its smart contracts.
A zero-knowledge proof (ZKP) adds a considerable layer of privacy to a public blockchain. It is foremost intended to hide the transaction history for a specific account. With ZKP, nodes are capable of verifying a transaction without seeing the actual amount being transacted.
ZKP is based on a game in which a “prover” tries to demonstrate to a “verifier” that a secret or statement is true, without revealing the secret itself. The verifier can ask questions in order to reduce the chance the prover is lying. By asking the same, simple A or B questions over and over, the verifier is able to reduce that chance from 50 percent to less than 0.00001 percent.
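A quick back-of-the-envelope check of that number: each independent A-or-B question halves a dishonest prover's chance of slipping through, so we can simply count the rounds needed. This small Python loop is purely illustrative:

```python
# 0.00001 percent, expressed as a probability
target = 0.00001 / 100

rounds, chance = 0, 1.0
while chance >= target:
    chance /= 2      # each question halves the cheater's success probability
    rounds += 1

print(rounds, chance)  # 24 rounds suffice: 2**-24 is about 6e-8
```

So a couple of dozen rounds already make successful cheating astronomically unlikely, which is why the repetition itself is cheap; the expense lies in the cryptographic machinery behind each round.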
It is an interesting way to provide privacy; however, verifying all of these questions requires a lot of computational power and time. On top of that, a slightly customized algorithm may be needed each time, depending on what you want to prove.
Even in an anonymized network, it is possible to figure out step-by-step where transactions are coming from, compromising a user’s identity. It is possible to operate a “spy node” that, over time, would involve noting all transaction details that pass the node. Using this information, the node can gradually build up a picture of where coins were located in obscured networks.
For the Bitcoin network, it is even possible to analyze the timing of each block being broadcasted and trace back with high probability to a transaction’s source node. From here, the spy node has high odds of gleaning the IP address of the transaction sender.
The Dandelion protocol works by sending transactions on a random path through the network, diffusing the transaction data across the network. This would make it nearly impossible to follow the breadcrumb trail.
Ring Confidential Transactions (Ring CT)
Monero implemented the concept of Ring CT as a privacy feature in its protocol. Using Ring CT, users can obfuscate the amounts they are transferring but also allow miners to verify their transactions without knowing the exact amounts.
For example, Bob wants to send Alice Monero (XMR). When transferring Monero, a transaction secret is shared between Bob and Alice, encrypted through Alice’s public key. This secret key is used to encrypt the transacted amount. Also, this secret can be decoded by Alice with her private key so she can verify that Bob is sending the correct amount of XMR.
But how are the miners able to verify the transaction? Third-party observers like miners won’t be able to decrypt the transacted amount. However, a Pedersen commitment is part of the Ring CT concept.
A Pedersen commitment is some cryptographic range proof that is added to the transaction. Miners are able to use the range proof to compute if the transacted output is greater than zero and smaller than a random number. It is a complex mathematical computation that allows miners to verify the transaction.
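To give a flavor of why such commitments let third parties check arithmetic without seeing amounts, here is a toy Pedersen-style commitment in Python. The modulus and generators are illustrative stand-ins, not real cryptographic parameters, and an actual range proof involves far more than this:

```python
p = 2**61 - 1   # toy prime modulus (a real scheme uses an elliptic-curve group)
g, h = 3, 7     # toy generators, assumed independent for illustration

def commit(value, blinding):
    """Pedersen-style commitment: g^value * h^blinding (mod p)."""
    return (pow(g, value, p) * pow(h, blinding, p)) % p

c1 = commit(5, 111)   # commit to an amount of 5, hidden by blinding factor 111
c2 = commit(7, 222)   # commit to an amount of 7

# Homomorphic property: multiplying commitments commits to the summed amounts,
# which is what lets a verifier check balances without learning the values.
assert (c1 * c2) % p == commit(5 + 7, 111 + 222)
print("commitments combine homomorphically")
```

The additive structure is the point: a verifier can confirm that inputs and outputs of a transaction balance by combining commitments, while the blinding factors keep each individual amount hidden.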
Stealth addresses are used by multiple blockchains, including Bitcoin, Verge and Monero. However, the Bitcoin blockchain does not support this natively, so both sender and receiver must take part in this process.
A stealth address requires the sender to create a random one-time address per transaction based on the recipient’s public address. The address is created using the so-called “public view key” and “public spend key” scrambled with random data.
The wallet addresses will not be publicly exposed during the transaction process. The one-time address is unlinkable to the original transaction but also unlinkable to any other one-time addresses that have been created for the recipient.
After the funds have been sent to the one-time address, the recipient can derive the secret key associated with this address and retrieve the funds. Only the sender and receiver will know a transaction occurred between them as no wallet addresses were made public.
Stealth addresses are a clever mechanism to retain privacy. Monero supports this feature by default for basic transfer transactions.
Other interesting privacy concepts include Mimblewimble, zk-SNARKs, and coin mixing and change addresses.
C++ abstract variant implementation
Is there any implementation of a variant like boost::any or boost::variant, but with an abstract interface, out there?
What I want is to pass variants between DLLs in a loosely coupled app. So if one DLL starts to store something new in the variant, I want to avoid changing the code of all other DLLs. All of the DLLs are built with different versions of Visual Studio with static CRT linkage, so one can't use STL classes in interfaces. A Boost dependency is also undesirable. That's why I want an abstract interface.
If I had to implement it, I would make an abstract interface with functions like MyVariantInterface::Get/SetData(int value_type_tag, byte* data) = 0, which can be passed between DLLs safely, plus a templated wrapper which allows convenient storage and extraction and does all the size/type checks inside the scope of one DLL.
Does something like this already exist?
Do you need one with predefined types or any number of user-defined types? E.g. QVariant from Qt supports a limited set of types only, but is probably pulling in too much if you don't depend on Qt yet.
Unfortunately we don't use Qt project-wide :(
User-defined types are very appealing, though a predefined-type solution is definitely worth considering too.
It's a bit old now, but this kind of requirement is what COM and DCOM were designed for. That is, different versions of C/C++ and, more notably, different languages altogether. COM has an interface compiler which allows you to expose your classes and interfaces in an abstract manner. As egur described, there is a VARIANT type which can be used to encapsulate various types. For loose coupling and late binding you might also like to consider the IDispatch interface, which lets you expose your components as call-by-name types.
https://en.wikipedia.org/wiki/IDispatch
Microsoft also ships a C++ SDK for COM called ATL (Active Template Library) which is a set of useful classes that can be used to author COM components. ATL also gives you several nice wrappers to the C COM types like VARIANT and which can be used to make implementing the calling conventions and lifecycle events for COM a bit easier. Namely CComVariant for your purposes
https://learn.microsoft.com/en-us/cpp/atl/reference/ccomvariant-class?view=msvc-160
There are also some other support wrapper classes that don't rely on ATL and the one that comes to mind in your case is _variant_t
https://learn.microsoft.com/en-us/cpp/cpp/variant-t-class?view=msvc-160
For Windows you can use VARIANT which is used a lot in COM.
No extra dependencies. VARIANT supports many types including COM interfaces (e.g. IUnknown). You can even pass multi-dimensional arrays with it.
Thank you. Do you know a good example or documentation on this? MSDN article is somewhat cryptic.
All the functions are here. If you have a specific question maybe you should post a new SO question.
Word2Vec: change of parameter, same results
I'm trying to train Word2Vec models, and I would like to create an embedding by averaging the results over different models. The problem is that I am not getting the results I expected. In fact, even when I change the parameters, I end up getting the same odd results.
The corpus is developed in this way, where 'documents' is a list of tweets.
word_corpus = [[str(token).lower() +'_' + str(token.pos_) for token in nlp(sentence) if token.pos_ in ('NOUN', 'VERB', 'ADJ') and len(str(token))>1] for sentence in documents]
Here the two models:
# initialize model
w2v_model1 = Word2Vec(vector_size=100, # vector size
window=3, # window for sampling
sample=0.01, # subsampling rate
epochs=10, # iterations
negative=10, # negative samples
min_count=11, # minimum threshold
workers=-1, # parallelize to all cores
hs=0 # no hierarchical softmax
)
# build the vocabulary
w2v_model1.build_vocab(corpus)
# train the model
w2v_model1.train(corpus, total_examples=w2v_model1.corpus_count, epochs=w2v_model1.epochs)
#####
w2v_model2 = Word2Vec(vector_size=100, # vector size
window=7, # window for sampling
sample=0.01, # subsampling rate
epochs=120, # iterations
negative=3, # negative samples
min_count=100, # minimum threshold
workers=-1, # parallelize to all cores
hs=0 # no hierarchical softmax
)
# build the vocabulary
w2v_model2.build_vocab(corpus)
# train the model
w2v_model2.train(corpus, total_examples=w2v_model2.corpus_count, epochs=w2v_model2.epochs)
and to evaluate them i am using the following syntax:
emb_df1 = (pd.DataFrame([w2v_model1.wv.get_vector(str(n)) for n in w2v_model1.wv.key_to_index],index = w2v_model1.wv.key_to_index)).T
emb_df2 = (pd.DataFrame([w2v_model2.wv.get_vector(str(n)) for n in w2v_model2.wv.key_to_index],index = w2v_model2.wv.key_to_index)).T
And I get this results.
As you can see, the number of words differs, but what seems odd to me is that, for words that appear in both models, I get exactly the same coordinates, and I can't understand why. If I understood its working mechanism correctly, the results are produced after some randomization steps, so it should be basically impossible to get the same results every time. I can't figure out what could be causing this error.
Have you examined your corpus to ensure it's of the size, and contents you expect? (What's len(corpus) & corpus[0]?) Have you enabled logging to at least the INFO level, & watched the logging output for confirmation that the expected steps, with somewhat varying treatments, are being applied?
len(corpus) = 100000
corpus[0] = 'prices_NOUN' .
I set logging option as logging.basicConfig(filename = 'Test.log',level = logging.INFO) and applied it to
logging.debug(w2v_model1.train(corpus, total_examples=w2v_model1.corpus_count, epochs=w2v_model1.epochs)).
Looking more closely at its output: it collects 46164 out of 2412867 words and returns, 100 times (once per epoch):
INFO EPOCH - 1 : training on 0 raw words (0 effective words) took 0.0s, 0 effective words/s WARNING:EPOCH - 1 : supplied example count (0) did not equal expected count (100000)
What could this be due to?
It's much easier to review code & output if you edit your answer to add it, with formatting, than squeeze into these unformattable comments. But I already see several potential issues: (1) above, your corpus is word_corpus, but here, it's corpus – if corpus is the real one, exactly how is it prepared? (2) if corpus[0] is just a string, it's the wrong format: each item in the corpus should be a list of string words – a single word like 'prices_NOUN' isn't a text. …
(3) if logs show zero training has happened, the corpus is likely broken. So any "results" you're seeing are just the model's unchanged random-initialization. To be sure you have a properly re-iterable corpus, you can try running the line print(sum(1 for _ in corpus)) multiple times in a row on the exact same corpus - if it always prints the same expected length, you know that at least corpus can be re-iterated over. If it ever prints 0, you have some sort of error, such as using a single-use iterator (such as a generator) instead of a truly re-iterable Python sequence.
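Building on the comments above, here is a quick stand-alone check of the corpus format (plain Python, no gensim needed). Note also that, as far as I know, gensim's Word2Vec does not interpret workers=-1 as "use all cores"; a non-positive worker count can itself leave the model untrained, so an explicit positive value such as workers=4 is safer.

```python
# The corpus must be a re-iterable sequence of token lists, not a flat
# list of strings: 'prices_NOUN' on its own is one word, not a text.
bad_corpus = ['prices_NOUN', 'rise_VERB']       # wrong: items are strings
good_corpus = [['prices_NOUN', 'rise_VERB'],    # right: items are lists
               ['market_NOUN', 'volatile_ADJ']]

def looks_like_token_lists(corpus):
    """Cheap sanity check before handing the corpus to Word2Vec."""
    return all(isinstance(doc, list) for doc in corpus)

assert not looks_like_token_lists(bad_corpus)
assert looks_like_token_lists(good_corpus)

# Re-iterability check suggested above: a generator yields the right count
# once and 0 afterwards, while a real sequence gives the same count every
# time -- a single-use iterator is one cause of "0 raw words" in the log.
gen = (doc for doc in good_corpus)
assert sum(1 for _ in gen) == 2
assert sum(1 for _ in gen) == 0      # exhausted single-use iterator
assert sum(1 for _ in good_corpus) == 2
assert sum(1 for _ in good_corpus) == 2
```

If training really processed 0 effective words, identical vectors across models are expected: both models are just showing their untouched random initialization, which is seeded the same way by default.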
Where do I ask a question about a specific case regarding Wikipedia?
I first tried asking the following question on the Web Applications site. The question was closed as opinion-based on that site. I asked what I can do on the meta site. I was told that the question is not a good fit for Web Applications, because it is about a specific case related to competency rules rather than using Wikipedia. So, on what site could I ask a question such as the following?
What steps do I take if I am indefinitely site-banned on Wikipedia?
My Wikipedia user page is at https://en.wikipedia.org/wiki/User:Neel.arunabh. Unfortunately, I have been banned indefinitely by the community, with a discussion at https://en.wikipedia.org/w/index.php?oldid=1066833242#Neel.arunabh's_competence_issues. An "indefinite" block is not an "infinite" block. So, what steps can I take so that I will be allowed to return to Wikipedia?
Have you tried contacting Wikipedia people?
This is like asking Ebay how to fix an issue on Amazon or asking Samsung how to fix your Apple phone
@ArunabhBhattacharya - Let me be clear. We cannot help you get unbanned from Wikipedia. However, I find it hilarious that someone indicated that an indefinite ban is not an infinite ban, since the literal meaning of the word is unlimited. The literal meaning of infinite is limitless. Your question is not within scope on any SE community at this time. Appeal your ban, but based on your actions I doubt that will happen. Don't reply to this comment; I won't reply, and will flag any response from you as unnecessary. I find your actions at Wikipedia personally deplorable.
The referenced Wikipedia process does not inspire a lot of confidence - "This behavior raises concerns about both English-language comprehension and potentials for copyright infringement in mainspace. ... abusive sockpuppetry ... disastrous attempts to fix things, like "fixing" reference errors by deleting references and content from articles ... once again display a poor command of English to the point that it interferes with his ability to make or understand arguments ... can't express simple thoughts in your own words"
It is not much better here on Stack Exchange. From the Spanish language meta site: Plagiarised question on Spanish (of this one). And that was only about one month ago. Perhaps it is time to drop plagiarism from your toolbox? Yes, that is a rhetorical question.
This question is about the inner-workings of another community (Wikipedia), and probably won't be a good fit anywhere on Stack Exchange.
Wikipedia itself has a few resources about appealing bans, I assume that would be a good place to start.
Senior Director of Partner & Customer Success at ERP Maestro. Ryan is an industry veteran and former IBM Security consultant.
January 2020 Release Notes
Happy New Year! Hopefully everyone had a fun-filled holiday season with family and friends. Since our last release, the development team has been extremely busy working on some fantastic enhancements and fixes. You’ll see below updates that address the rulebook change log, the Automated Provisioning module, and more! The January release is expected to occur on January 16, 2020.
As always, thank you for all of the feedback and product suggestions. As you and your colleagues think of new ideas, please send them directly to me or post on our Feature Requests page.
Below are the features in the release.
Rule Import and Change Logs
Simplified Rulebook Import
With this release, if a rulebook is not in an import file, it will no longer be deleted from a customer’s account. To delete a rulebook, you must use the application. This change lets rulebook imports focus only on the rulebook or rulebooks being changed, and it helps avoid accidental deletion of a rulebook.
Performance Improvements to Rulebook Change Logs
We enhanced the performance and capacity of rulebook change log generation. You will now be able to generate change log files over a much greater timespan for a given number of rulebooks and the changes made over that time period. We recommend generating change log files on a monthly basis to keep the downloadable files manageable.
We also fixed an issue where business function permission changes were recorded in change logs even when rulebooks were imported without any business function permission changes.
New Feature: Partial Role Provisioning
Many customers have asked for the capability to provision any approved roles for a request even if one or more roles have been rejected. There is a new checkbox on the Automated Provisioning Request screen that will enable this capability for a request.
New Feature: Comment Field
To help improve the timeliness of processing a provisioning request and maintain a complete audit record, we added an optional comment field to provisioning requests. Customers may choose to use this field to enter a ticket number from their ticketing system for end-to-end traceability of provisioning requests.
New Feature: Search for Role by T-code
In addition to searching for roles by role name, users can now search for roles that contain specific T-codes. Once the list of roles is displayed, the user can select the ones to be provisioned in the request.
We have added the user’s full name and department to the list of users to select from. The class column has been changed to User Group.
Automated Provisioning Dashboard
The request dashboard has been enhanced to include the user’s full name as the value for the “Requested For” column.
Updated Email Notifications
In addition to sending an email to the requestor to report a successfully completed request, an email will now also be sent to the end user whose access was modified.
Updated Provisioning Logs
The provisioning logs have been updated to include the user’s full name.
Access Review Action Log Report Change
The Access Review Action Log report has been updated to include a new column called “DelegatorFullName” before the “OldReviewer” column. This column will be populated with the full name of the user that delegated the review item.
License out the engine to a select few people who have shown interest in the engine and can do the job.
Showcase their work in the Kickstarter video after infinity battlescape pitch. Do not showcase them in the main KS video.
Tell them about these other projects in KS updates saying that “We want to be an engine and game company however we need money to do so.” or Something along these lines.
Offer Licenses for the engine as part of the Kickstarter rewards.
I can see the benefits in several ways -
People will have an incentive to back. Not only would they be getting one game, they would be getting multiple games by supporting the engine and the company. Basically, by investing, so to speak, in the company’s future, they get engine licenses and the possibility of other games.
Engine Licenses for developing games.
More Money for developing the engine and the game.
They’re most of the way through their Kickstarter, using an engine that is not ready to be used by anyone but their team. What you’re talking about isn’t remotely practical. They’d have to get the engine into a form ready for use by third parties, support that engine, wait for the third parties to develop something worth showing in INS’s Kickstarter, then fold whatever great stuff the third parties come up with into it. And all the while those third parties are developing their great applications, INS would be on the hook for supporting and enhancing the engine to allow the third parties to build their separate visions.
This could go on for a couple of years, and most of those independent developers - who aren’t on the hook for doing anything at all - will just get bored, get married, get a new job, have a child - and move on. “But you can have my game source code” will be their parting words.
So just sit tight and wait like everyone else. You can spend your time hoping that the Kickstarter will have a donation tier that allows you to get early access to the engine. Then you can support INS financially, get your hands on the engine, build the game of your dreams, and show the world how great the INS engine is.
@JB47394 is correct however we are considering the possibility of making the engine available as a pledge tier during the Kickstarter. It would have to be with the understanding that:
We still have some major engine work to do and the API may not be stable.
We still have a lot of work to do on tools. In fact our engine will only be immediately useful to programmers, or artists working with programmers, as our tools need a lot of love.
We have no documentation whatsoever at the moment. For those of you who have significant game/engine programming experience, we can write up something in fairly short order that would be enough to get you pointed in the right direction. For those of you with little to no game/engine programming experience, you’re likely in for a world of hurt in the near term. That being said, we actually do have a number of tutorial programs we’ve written to show people how to use the engine. Ironically, we also use them as unit tests for the engine.
Well, as long as you specifically refer to it as “Access to the Engine Alpha” it shouldn’t be much of a problem. I mean, whoever gets himself into this sort of thing should know what an alpha is.
About the game Alpha/Beta tiers: will modding tools come along with them, or will we see them later (around Beta or late Beta)? I would like to bash my head against them even with little documentation; it would be quite an experience.
I’m only guessing here, but making the engine available to the sort of tinkerers that won’t mind its unfinished state might be directly beneficial to you. A community of tinkerers (wishful thinking?) would be best suited to finding and reporting the breaking points, and perhaps could even help fill the gaps in documentation for each other as they work things out.
That is currently the plan though don’t hold me to it until we formally announce our pledge tiers.
I couldn’t agree more. This is an active area of discussion for us, we want to get the tech into the hands of modders asap however we have to balance the development of the mod tools and the game - the game itself will take priority. This is why we’re still debating exactly how/when we’re going to give you guys access to the engine.
Add Browser middleware
Inspired by Guzzle middlewares I added a middleware system to the Browser component.
I took the already existing \React\Http\Io\MiddlewareRunner as an example for this.
Example usage:
class MyCustomMiddleware implements \React\Http\Browser\MiddlewareInterface
{
public function __invoke(RequestInterface $request, callable $next)
{
return $next($request->withHeader('X-Foo', 'Bar')); // I know this is also possible another way, but just as a demo
}
}
$client = (new Browser())->withMiddleware(new MyCustomMiddleware());
$client->get('https://example.com/api/v1/demo');
My concrete use case right now is that I want to keep track of the last request for advanced logging. I do this by storing request information in a custom middleware.
Please let me know what you think and if I should apply some changes to my code in order for this to get merged :)
Done this some years ago over at https://github.com/orgs/php-api-clients/repositories?q=middleware-&type=all&language=&sort= and would love to see this land directly in here instead of building on top of it. Things like gzip and other compressions could also benefit from this. Other things that come to mind are metrics about requests per route/method/whatever, tracing, or authentication. Maybe even #445 could benefit from this before it lands directly in this package.
This, IMHO would be a great feature candidate for our next major: v3.
Hi @WyriHaximus,
happy to hear that this feature can be useful for others as well.
But the next new major version would be v2, not v3, wouldn't it?
Hi @WyriHaximus, happy to hear that this feature can be useful for others as well.
Well, I'm not the only one to convince, talked about this with @clue and @SimonFrings in the past and we had other features to include back then.
But the next new major version would be v2, not v3, wouldn't it?
We went with v3 instead of v2 for aesthetic reasons and to keep it in line with other packages already having a v2. With v3 we're pulling everything on the same minimum version again.
@R4c00n Thanks for opening this PR, this is definitely a cool feature and something I want to see being part of this component! 👍
I also don't know what the error in the PHP5.3 builds wants to tell me.
It seems like it has some problem with the __invoke() of your interface, but I think we don't even need a new interface for this, which would ultimately resolve the problem with PHP 5.3.
But the next new major version would be v2, not v3, wouldn't it?
As @WyriHaximus said, we decided to go for v3 instead of v2 to keep our versioning consistent across all our projects (especially because of promise v3), you can read all about our decision here: https://github.com/orgs/reactphp/discussions/472#discussioncomment-3680968
Also a little heads up, once all tests are green and this is ready to merge, make sure to combine your commits into one. Nothing to worry about now, I think it makes most sense to do this, once everything is in and functional ;)
It seems like this ticket has been open for quite a while now and hasn't received any updates since. To avoid issues and pull requests lying around for too long, I'll close this for now.
We're currently working towards a v1.10.0 of our HTTP component and once this is released, we can start with the feature implementations for HTTP v3 as discussed in #517. As @WyriHaximus said above, this could be a good v3 candidate so we can reopen/revisit this once we're sure this will be part of the v3 release. If this doesn't involve any BC breaks, it's also possible to add this in a following v3.1, v3.2, etc. 👍
System76 has today released the latest version of their own Linux distribution, with Pop!_OS 22.04 LTS.
The basics are that they've rebased on top of Ubuntu 22.04 for their packaging, with it being an LTS it's supported for 5 years with updates. Also included is the more up to date Linux Kernel 5.16.19 at release (and they do regular updates), Mesa 22 drivers and GNOME 42 with their COSMIC interface. Plenty more tweaks are included with this release too, so here's a quick run-down.
Automatic updates are available in the new OS Upgrade & Recovery panel in Settings, allowing you to set what time and day you want them to happen. Notifications for updates are also now shown weekly, they said it's to "reduce distractions" but you can change that frequency too.
The settings menu also has a new Support panel, giving you various options to get help, from articles to support chat. You can now set different backgrounds for light and dark modes. There's also better performance with their newer System76 Scheduler; they said this "optimizes performance by directing resources to the window in focus", which should help gaming. PipeWire is now the default for audio too, so you get the latest and greatest way to handle it all on Linux, something I am a big fan of.
Pop!_Shop, their software store, saw plenty of upgrades too including:
- Backend code improvements for more responsive operations
- Improved reliability for package operations (update, install, etc.)
- UI Improvements to aid in allowing small window sizes for tiling
- Update and Install buttons now also function as a progress bar
- New "Recently Updated" homepage section highlighting newly added/updated apps
Various other improvements include better multi-monitor support with the workspaces view, a fixed layout on HiDPI displays and increased performance.
Outside of that, they sent over this list:
- Installed NVIDIA drivers are now visible in Pop!_Shop, and will no longer include an “Install” button. Older drivers are also available to install, though the most recent available NVIDIA driver is recommended for most NVIDIA GPUs.
- Better performance with improvements to the CPU scaling governor, which keeps your CPU running at the optimal frequency for your system.
- The Pop!_OS upgrade service will now only activate when checking for or performing release upgrades. (Previously it was active 24/7.)
- If your upgrade gets interrupted, Debian packages are now resumable, meaning you can pick up the upgrade from where you left off.
- File type for icons has been changed to .svg.
- Max disk capacity for journald logs is now limited to 1GB.
- Added support for laptop privacy screens.
- RDP by default for remote desktop use.
- Better performance, scaling, and reliability in Pop!_Shop.
- Added this funky new user icon.
Summary: Big data study combines information from a diverse set of experiments to identify patterns of brain activity common across people and tasks.
Source: U.S Army Research Laboratory
A big data approach to neuroscience promises to significantly improve our understanding of the relationship between brain activity and performance.
To date, there have been relatively few attempts to use a big-data approach within the emerging field of neurotechnology. In this field, the few attempts at meta-analysis (analysis across multiple studies) combine only the results from individual studies rather than the raw data. A new study is one of the first to combine data across a diverse set of experiments to identify patterns of brain activity that are common across tasks and people.
The Army in particular is interested in how the cognitive state of Soldiers can affect their performance during a mission. If you can understand the brain, you can predict and even enhance cognitive performance.
Researchers from the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory teamed with the University of Texas at San Antonio and Intheon Labs to develop a first-of-its-kind mega-analysis of brain imaging data–in this case electroencephalography, or EEG.
They aggregated the raw data from 17 individual studies, collected at six different locations, into a single analytical framework, publishing their findings in a series of two papers in the journal NeuroImage. The individual studies included in this analysis encompass a diverse set of tasks, such as simulated driving and visual search.
“The vast majority of human neuroscientific studies use a very small number of participants employed in very specific tasks,” said Dr. Jonathan Touryan, an Army scientist and co-author of the paper. “This limits how well the results from any single study can be generalized to a broader population and a larger range of activities.”
Mega-analysis of EEG is extremely challenging due to the many types of hardware systems (properties and configuration of the electrodes), the diversity of tasks, how different datasets are annotated, and the intrinsic variability between individuals and within an individual over time, Touryan said.
These sources of variability make it difficult to find robust relationships between brain and behavior. Mega-analysis seeks to address this by aggregating large, heterogeneous datasets to identify universal features that link neural activity, cognitive state and task performance.
Next-generation neurotechnologies will require a thorough understanding of this relationship in order to mitigate deficits or augment performance of human operators. Ultimately, these neurotechnologies will enable autonomous systems to better understand the Soldier and facilitate communications within multi-domain operations, he said.
To combine the raw data from the collection of studies, the researchers developed Hierarchical Event Descriptors (HED tags) – a novel labeling ontology that captures the wide range of experimental events encountered in diverse datasets. This HED tag system was recently adopted into the Brain Imaging Data Structure international standard, one of the most common formats for organizing and analyzing brain data, Touryan said.
The research team also developed a fully automated processing pipeline to perform large-scale analysis of their high-dimensional time-series data–amounting to more than 1,000 recording sessions.
Much of this data was collected over the last 10 years through the U.S. Army’s Cognition and Neuroergonomics Collaborative Technology Alliance and is now available in an online repository for the scientific community. The U.S. Army continues to use this data to develop human-autonomy adaptive systems for both the Next Generation Combat Vehicle and Soldier Lethality Cross-Functional Teams.
About this neuroscience research article
Source: U.S Army Research Laboratory Media Contacts: Patti Riippa – U.S Army Research Laboratory Image Source: The image is credited to the U.S. Army.
Automated EEG mega-analysis I: Spectral and amplitude characteristics across studies
Significant achievements have been made in the fMRI field by pooling statistical results from multiple studies (meta-analysis). More recently, fMRI standardization efforts have focused on enabling the joint analysis of raw fMRI data across studies (mega-analysis), with the hope of achieving more detailed insights. However, it has not been clear if such analyses in the EEG field are possible or equally fruitful. Here we present the results of a large-scale EEG mega-analysis using 18 studies from six sites representing several different experimental paradigms. We demonstrate that when meta-data are consistent across studies, both channel-level and source-level EEG mega-analysis are possible and can provide insights unavailable in single studies. The analysis uses a fully-automated processing pipeline to reduce line noise, interpolate noisy channels, perform robust referencing, remove eye-activity, and further identify outlier signals. We define several robust measures based on channel amplitude and dispersion to assess the comparability of data across studies and observe the effect of various processing steps on these measures. Using ICA-based dipolar sources, we also observe consistent differences in overall frequency baseline amplitudes across brain areas. For example, we observe higher alpha in posterior vs anterior regions and higher beta in temporal regions. We also detect consistent differences in the slope of the aperiodic portion of the EEG spectrum across brain areas. In a companion paper, we apply mega-analysis to assess commonalities in event-related EEG features across studies. The continuous raw and preprocessed data used in this analysis are available through the DataCatalog at https://cancta.net.
Automated EEG mega-analysis II: Cognitive aspects of event related features
We present the results of a large-scale analysis of event-related responses based on raw EEG data from 17 studies performed at six experimental sites associated with four different institutions. The analysis corpus represents 1,155 recordings containing approximately 7.8 million event instances acquired under several different experimental paradigms. Such large-scale analysis is predicated on consistent data organization and event annotation as well as an effective automated preprocessing pipeline to transform raw EEG into a form suitable for comparative analysis. A key component of this analysis is the annotation of study-specific event codes using a common vocabulary to describe relevant event features. We demonstrate that Hierarchical Event Descriptors (HED tags) capture statistically significant cognitive aspects of EEG events common across multiple recordings, subjects, studies, paradigms, headset configurations, and experimental sites. We use representational similarity analysis (RSA) to show that EEG responses annotated with the same cognitive aspect are significantly more similar than those that do not share that cognitive aspect. These RSA similarity results are supported by visualizations that exploit the non-linear similarities of these associations. We apply temporal overlap regression, reducing confounds caused by adjacent event instances, to extract time and time-frequency EEG features (regressed ERPs and ERSPs) that are comparable across studies and replicate findings from prior, individual studies. Likewise, we use second-level linear regression to separate effects of different cognitive aspects on these features across all studies. This work demonstrates that EEG mega-analysis (pooling of raw data across studies) can enable investigations of brain dynamics in a more generalized fashion than single studies afford. 
A companion paper complements this event-based analysis by addressing commonality of the time and frequency statistical properties of EEG across studies at the channel and dipole level.
|
OPCFW_CODE
|
Are docs available for older versions of wowchemy?
Prerequisites
[X] I have searched for duplicate or closed feature requests
[X] I am mindful of the project scope
Proposal
It would be nice to have access to documentation for older versions of wowchemy, as is the case with other docs pages (e.g. readthedocs).
Motivation and context
The API for wowchemy changes frequently, the current docs are not applicable to the version of wowchemy I am currently running, and upgrading wowchemy often breaks my website.
This is an independent, community-driven, open-source and open-science project created by creators/researchers for creators/researchers, so the documentation depends on what the community contributes. As we are an independent community, not a big corporation like those behind RStudio or WordPress, please, if you find an opportunity for improvement, help support independent open source by contributing improvements :)
If you are interested in supporting independent open source and open science, you can find the contributor docs here: https://github.com/wowchemy/wowchemy-hugo-themes/blob/main/CONTRIBUTING.md
Some of the most popular docs pages which the community has contributed to can be found at https://github.com/wowchemy/wowchemy-hugo-themes/tree/main/docs/en which is versioned in Git, so you can travel back in time. We are in the process of migrating the rest of the docs pages to this folder.
Also, you should find documentation and changes in the release notes in GitHub Releases (or on the blog for older versions).
The documentation pages on the site also try to highlight, where possible, how to perform an action with both older and newer versions of the software.
The reason for a number of breaking changes in Wowchemy over the last few years is breaking changes in Hugo, which we obviously have to adopt if the software is to be compatible with new versions of Hugo. The Hugo community is one of the largest static site generator communities, so there tends to be a high velocity of improvements. Every site should be tied to a specific version of Wowchemy and Hugo so there is no risk of a site breaking when the Hugo team releases a new update. Wowchemy is now in a very stable phase where all the main user feedback has been addressed. You will find that smaller GitHub projects will likely cause you more concern due to their smaller communities and even more rapid roll-out of breaking changes as they scale up.
If you have any questions regarding Wowchemy or Hugo, just follow the links to raise them in our Discord community (rather than raising questions as GitHub issues) and the large community will of course try to help you. To complement this, there is also a huge Hugo community on the Hugo forum who are happy to help with any Hugo issues.
Hi Geo! Thanks for taking the time to respond to this issue.
It makes sense that Wowchemy changes over time, whether it be due to functionality being added or due to changes to Hugo itself.
However, coming back to @scottgigante-immunai 's original question: is there a way to access the Wowchemy documentation website for different versions of Wowchemy?
You mention Wowchemy is open-source and that the community has contributed to the documentation website, but if the repository for the documentation was open source we could just go back in time ourselves and look at the documentation for Wowchemy v5.2.* without any issue.
Thanks @gcushen . As an example, if I was interested in creating a menu for my site that's built on Wowchemy v5.2, where could I find the equivalent page for https://wowchemy.com/docs/getting-started/get-started/#menu ? I don't see it in the docs/en directory you linked to.
Unfortunately the solution here (for anyone reading along at home) was to stop using Wowchemy. We instead migrated our website to Quarto.
|
GITHUB_ARCHIVE
|
I found that the IRFs from my non-linearized model (which I entered directly in the model block, without exp() and without the loglinear option, using stoch_simul(order=1, periods=2000, irf=100)) are different from the IRFs of the hand log-linearized version of the model (which I entered after the model(linear); command, with the same stoch_simul(order=1, periods=2000, irf=100)).
Is it possible that the IRFs from the linearized and log-linearized models are not the same, or did I make a mistake somewhere? I thought these IRFs should be the same.
P.S. There are variables whose steady-state value is 0, and I am not sure I handle this correctly. In log-linearizing the model, I use the following linearization for zero-steady-state variables:
If I understand your description correctly, you are comparing non-logged levels in the nonlinear model to the logged version in the linearized model. Those IRFs will obviously not be the same, because one is in percent, the other not.
I know the magnitudes (the values on the vertical axis of the IRF graphs) are different (one is in percentages, the other in levels), but will the shapes of the IRFs be the same? I tried a simple RBC model and found that although the values differ, the shapes look the same.
Thanks a lot for the clarification. Sorry for the confusion, I thought you mean take log first. Yes, I am using your approach for hand log linearization. But the irfs from my hand linearized model is different from the irfs from the non-linearized model. Not sure where goes wrong.
For taking logs, I saw Eric sims’s notes doing so. e.g:
There are different ways to log-linearize at first order, but they should all result in the same outcome. @Olivia If you are unsure how to do this, then you should not be linearizing by hand. It's too error-prone. Let the computer do it. See also Why do we log linearize a model by hand?
Regarding steady state 0: in this case, you do not log-linearize, you linearize. Thus, the IRFs should be identical to the nonlinear version at first order.
Sorry to be repetitive, I just want to double-check with you.
Does it mean that:
1) if I use exp() for log-linearization, I should not apply exp() to zero-steady-state variables, but only to the other, non-zero-steady-state variables that I want to measure in percentages?
2) if I hand log-linearize the whole model, I should leave a zero-steady-state variable as x_t itself or replace it by x^hat_t (in fact they are the same?), and not replace it by x_ss * exp(x^hat_t) as I do for non-zero-steady-state variables?
where x_t is the variable itself,
x_ss is the steady state of the variable,
and x^hat_t is the deviation from steady state for zero-steady-state variables, and the percentage deviation from steady state for non-zero-steady-state variables.
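To summarize the substitution under discussion (my notation, stated as I understand the thread):

```latex
% Non-zero steady state: substitute the exp() form, so \hat{x}_t is a
% percentage deviation from steady state:
x_t = x_{ss}\, e^{\hat{x}_t}, \qquad
\hat{x}_t = \log x_t - \log x_{ss} \;\approx\; \frac{x_t - x_{ss}}{x_{ss}}.
% Zero steady state: no logs are taken; the variable enters in levels and
% the deviation is the level itself:
\hat{x}_t = x_t - x_{ss} = x_t.
```

Under this convention, a zero-steady-state variable is linearized (not log-linearized), which matches the advice above that its IRF should be identical to the nonlinear version at first order.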
|
OPCFW_CODE
|
Practice 3
Please fix the following:
Just a matter of taste, but I would move this group below the #Parameters and add an explanatory title, like "global variables" or similar. Parameters come first because their values come from outside and they determine the node's behaviour; global variables are internal to the node.
https://github.com/jaak-peterson/autoware_mini_practice/blob/0377df857d517db02bcb1db841aabd2f651e890c/practice_3/nodes/control/pure_pursuit_follower.py#L15-L16
You have exactly the same for loop over msg.waypoints twice. Reorganize the code so that the loop is done only once.
https://github.com/jaak-peterson/autoware_mini_practice/blob/0377df857d517db02bcb1db841aabd2f651e890c/practice_3/nodes/control/pure_pursuit_follower.py#L31-L32
As discussed in the lecture, these two assignments:
https://github.com/jaak-peterson/autoware_mini_practice/blob/0377df857d517db02bcb1db841aabd2f651e890c/practice_3/nodes/control/pure_pursuit_follower.py#L36
https://github.com/jaak-peterson/autoware_mini_practice/blob/0377df857d517db02bcb1db841aabd2f651e890c/practice_3/nodes/control/pure_pursuit_follower.py#L42
should both be together at the end of the callback, and self.distance_to_velocity_interpolator should be assigned from a local variable, so it should first be calculated into a local variable.
The problem here is that path_callback runs very rarely, but it might take longer than one cycle of current_pose_callback, and then self.path_linestring and self.distance_to_velocity_interpolator might not be in sync any more.
It would be even better to use threading.Lock (not strictly necessary here, but you can try).
The lock should also be used at the beginning of the callback that needs the shared variables, and there the shared variables should be copied into local variables.
import threading
# inside class init
self.lock = threading.Lock()
# in the callback
with self.lock:
self.path_linestring = path_linestring
self.distance_to_velocity_interpolator = distance_to_velocity_interpolator
OK
OK
Some comments:
path_callback
https://github.com/jaak-peterson/autoware_mini_practice/blob/8136687018f903b9cee09250bd1b6d785b69cf3b/practice_3/nodes/control/pure_pursuit_follower.py#L47-L49
I would first create interp1d into local variable and in the self.lock group do only the assignment to global (class) variable.
current_pose_callback
https://github.com/jaak-peterson/autoware_mini_practice/blob/8136687018f903b9cee09250bd1b6d785b69cf3b/practice_3/nodes/control/pure_pursuit_follower.py#L51-L57
There is a possibility that after checking that the global variable is not None it will be changed to None just before assignment to a local variable.
You should check (if it is not None) and later use in the callback the same variable.
So I would assign global variables to local ones, then check if the local ones are not None, and if they are not None, you can use them in the callback.
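Both sides of the pattern together might look like this (a minimal sketch; only the class and variable names are taken from the code under review, the rest is illustrative):

```python
import threading

class PurePursuitFollower:
    def __init__(self):
        self.lock = threading.Lock()
        self.path_linestring = None
        self.distance_to_velocity_interpolator = None

    def path_callback(self, path_linestring, interpolator):
        # Compute into locals first; hold the lock only for the assignments,
        # so both shared variables always change together.
        with self.lock:
            self.path_linestring = path_linestring
            self.distance_to_velocity_interpolator = interpolator

    def current_pose_callback(self):
        # Copy shared state to locals under the lock, then check the locals;
        # this avoids the race where a value becomes None after the check.
        with self.lock:
            path = self.path_linestring
            interpolator = self.distance_to_velocity_interpolator
        if path is None or interpolator is None:
            return None
        # ... use path and interpolator for the actual control computation ...
        return path, interpolator
```

The key point is that the reader never touches `self.path_linestring` twice: it reads the shared variables once, under the lock, and works with the local copies afterwards.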
OK
|
GITHUB_ARCHIVE
|
Fix running benchmark
I find that on Windows (at least), if you try to do
$ make benchmark
it crashes out with this:
$ bundle exec ruby -I lib:ext -r fast_polylines ./perf/benchmark.rb
Traceback (most recent call last):
3: from ./perf/benchmark.rb:3:in `<main>'
2: from ./perf/benchmark.rb:3:in `require'
1: from D:/qxp/bbt_proj/fast-polylines/lib/fast_polylines.rb:3:in `<top (required)>'
D:/proj/fast-polylines/lib/fast_polylines.rb:3:in `require': cannot load such file -- fast_polylines/fast_polylines (LoadError)
In the Makefile, I see this:
RUBY_FLAG = -I lib:ext -r $(EXT_NAME)
benchmark: ext
bundle exec ruby $(RUBY_FLAG) ./perf/benchmark.rb
This does not seem to put the lib directory on the load path, and so (it appears that) it fails with the above error.
The fix is simple:
RUBY_FLAG = -I lib:ext -I lib -r $(EXT_NAME)
I'm keen to know if this is a problem that you also see or is it just me?
Thanks!
Hi @mohits,
According to ruby's man page:
-I directory Used to tell Ruby where to load the library scripts. Directory path will be added to the load-path variable ($:).
Hence lib:ext is already adding both to the path.
Maybe the ruby CLI has a different behavior on Windows? I guess you could try the next bash command:
$ ruby -I lib:ext <<<"puts $:"
/Users/me/Dev/fast-polylines/lib
/Users/me/Dev/fast-polylines/ext
...
@BuonOmo - It seems to be!
C:\Users\mohit>ruby -I lib:ext -e "puts $:"
C:/Users/mohit/lib:ext
...
Digging into this, I found this from Programming Ruby:
-I directories
Specifies directories to be prepended to $LOAD_PATH ($:). Multiple -I options may be present, and multiple directories may appear following each -I. Directories are separated by a ":" on Unix-like systems and by a ";" on DOS/Windows systems.
Of course, when we do the right thing, it works :)
C:\Users\mohit>ruby -I lib;ext -e "puts $:"
C:/Users/mohit/lib
C:/Users/mohit/ext
C:/Ruby22-x64/lib/ruby/site_ruby/2.2.0
Hi @mohits, it's been a while since I wanted to improve the makefile. It is done now, could you tell me if it is now working on Windows? :) (https://github.com/klaxit/fast-polylines/pull/23)
Thank you for doing this. Merging #19 and #23 makes Windows a "first class citizen", i.e., it works out of the box although it won't be tested automatically due to Travis limitations. If you tag me on a pre-release, I'd be happy to help run it locally on my Windows Ruby to make sure that nothing is broken.
@mohits I'm thinking about adding a note about that on the readme. I'll do so once everything is merged. No need for a manual test on pre-release, I'll do a patch and include the fact that windows compilation is not guaranteed since we cannot test it in an automated fashion. Could I add your name in that paragraph to say that you should be mentioned in windows related issue ?
In the meantime, I'll wait for the last fixes to integrate every PRs you made ! Thanks again for the journey you made :)
You're most welcome. I'm sorry I had overlooked the note on the fixes you wanted from me when I cleared out everything on Sunday. Have made the minor changes on #19 and once it goes through Travis, hopefully, everything can be merged and we are good to publish the latest version :) that will work with Windows out of the box.
Thanks for being receptive to the changes.
Hi @BuonOmo - I wanted to write a blog post that examines how a native gem works and wanted to use fast-polylines as the gem under the microscope. The first part is at: https://notepad.onghu.com/2023/learning-by-reversing-s1-e1-native-gems/ but when putting this together, I realised that running make benchmark actually still does not work. It works if you run the benchmark script directly from the windows command line but not from the Makefile.
The reason for this is that the Makefile specifies SHELL=/bin/sh, which does not respect the Windows-style -Ilib;ext, while on the other hand the rest of the Windows setup needs it.
The correct fix is to use -Ilib -Iext so that both paths are passed to it. I will raise a PR for this again but let me know if I should open this as a separate issue or leave it on this one since the title of the issue would likely be similar.
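For anyone following along, the platform-dependent separator behaviour behind this whole issue can be demonstrated in a few lines of Python (an illustration only, not project code):

```python
import os

# Search-path strings use a platform-dependent separator: ':' on Unix-like
# systems and ';' on Windows. That is why "lib:ext" is read as a single
# directory name on Windows. Passing one directory per flag, as in
# "-I lib -I ext", sidesteps the separator entirely.
print(os.pathsep)                    # ':' on Unix-like systems, ';' on Windows
print("lib:ext".split(os.pathsep))   # two entries on Unix, one entry on Windows
```

The same reasoning applies to Ruby's `-I`: one directory per option is the only form that behaves identically on every platform and under every shell.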
Nice article! I don't have access to my computer these days, but I'll test your PR on Mac as soon as I have my laptop. No need to open a new issue, the PR will be just fine.
I will create the PR in a few days (probably next weekend) and there's no rush to resolve it :)
|
GITHUB_ARCHIVE
|
Woocommerce mySQL Query - List All Orders, Users and Purchased Items
I have a fully working mySQL query which pulls all of the orders, users, addresses and items purchased from Woocommerce, however it only lists the products individually, and I would like to add the quantity for each product displayed.
Currently shows 'Items Ordered'
Running Shoes
Walking Shoes
Where it should show 'Items Ordered'
3 x Running Shoes
4 x Walking Shoes
SELECT
p.ID AS 'Order ID',
p.post_date AS 'Purchase Date',
MAX( CASE WHEN pm.meta_key = '_billing_email' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'Email Address',
MAX( CASE WHEN pm.meta_key = '_billing_first_name' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'First Name',
MAX( CASE WHEN pm.meta_key = '_billing_last_name' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'Last Name',
MAX( CASE WHEN pm.meta_key = '_billing_address_1' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'Address',
MAX( CASE WHEN pm.meta_key = '_billing_city' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'City',
MAX( CASE WHEN pm.meta_key = '_billing_state' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'State',
MAX( CASE WHEN pm.meta_key = '_billing_postcode' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'Post Code',
CASE p.post_status
WHEN 'wc-pending' THEN 'Pending Payment'
WHEN 'wc-processing' THEN 'Processing'
WHEN 'wc-on-hold' THEN 'On Hold'
WHEN 'wc-completed' THEN 'Completed'
WHEN 'wc-cancelled' THEN 'Cancelled'
WHEN 'wc-refunded' THEN 'Refunded'
WHEN 'wc-failed' THEN 'Failed'
ELSE 'Unknown'
END AS 'Purchase Status',
MAX( CASE WHEN pm.meta_key = '_order_total' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'Order Total',
MAX( CASE WHEN pm.meta_key = '_paid_date' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'Paid Date',
( select group_concat( order_item_name separator '</p>' ) FROM wp_woocommerce_order_items where order_id = p.ID ) AS 'Items Ordered'
FROM wp_posts AS p
JOIN wp_postmeta AS pm ON p.ID = pm.post_id
JOIN wp_woocommerce_order_items AS oi ON p.ID = oi.order_id
WHERE post_type = 'shop_order'
GROUP BY p.ID
I believe Woocommerce stores the QTY in the following table / entries:
wp_woocommerce_order_itemmeta
order_item_id
SELECT wp_woocommerce_order_itemmeta.meta_value
FROM wp_woocommerce_order_itemmeta
WHERE wp_woocommerce_order_itemmeta.meta_key = '_qty' AND wp_woocommerce_order_itemmeta.order_item_id =
And I need to join it in somehow into this section:
( select group_concat( order_item_name separator '</p>' ) FROM wp_woocommerce_order_items where order_id = p.ID ) AS 'Items Ordered'
Thanks in advance.
UPDATED WITH ANSWER FROM: Lucek
Thanks Lucek, absolutely perfect.
I've combined the complete query in case anyone else wants to copy it.
SELECT
p.ID AS 'Order ID',
p.post_date AS 'Purchase Date',
MAX( CASE WHEN pm.meta_key = '_billing_email' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'Email Address',
MAX( CASE WHEN pm.meta_key = '_billing_first_name' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'First Name',
MAX( CASE WHEN pm.meta_key = '_billing_last_name' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'Last Name',
MAX( CASE WHEN pm.meta_key = '_billing_address_1' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'Address',
MAX( CASE WHEN pm.meta_key = '_billing_city' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'City',
MAX( CASE WHEN pm.meta_key = '_billing_state' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'State',
MAX( CASE WHEN pm.meta_key = '_billing_postcode' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'Post Code',
CASE p.post_status
WHEN 'wc-pending' THEN 'Pending Payment'
WHEN 'wc-processing' THEN 'Processing'
WHEN 'wc-on-hold' THEN 'On Hold'
WHEN 'wc-completed' THEN 'Completed'
WHEN 'wc-cancelled' THEN 'Cancelled'
WHEN 'wc-refunded' THEN 'Refunded'
WHEN 'wc-failed' THEN 'Failed'
ELSE 'Unknown'
END AS 'Purchase Status',
MAX( CASE WHEN pm.meta_key = '_order_total' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'Order Total',
MAX( CASE WHEN pm.meta_key = '_paid_date' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'Paid Date',
( SELECT GROUP_CONCAT(CONCAT(m.meta_value, ' x ', i.order_item_name) separator '</br>' )
FROM wp_woocommerce_order_items i
JOIN wp_woocommerce_order_itemmeta m ON i.order_item_id = m.order_item_id AND meta_key = '_qty'
WHERE i.order_id = p.ID AND i.order_item_type = 'line_item') AS 'Items Ordered',
MAX( CASE WHEN pm.meta_key = 'post_excerpt' AND p.ID = pm.post_id THEN pm.meta_value END ) AS 'User Comments'
FROM wp_posts AS p
JOIN wp_postmeta AS pm ON p.ID = pm.post_id
JOIN wp_woocommerce_order_items AS oi ON p.ID = oi.order_id
WHERE post_type = 'shop_order'
GROUP BY p.ID
What a god send! Cheers
You need to replace Items Ordered section with this:
( SELECT GROUP_CONCAT(CONCAT(m.meta_value, ' x ', i.order_item_name) separator '</p>' )
FROM da_woocommerce_order_items i
JOIN da_woocommerce_order_itemmeta m ON i.order_item_id = m.order_item_id AND meta_key = '_qty'
WHERE i.order_id = p.ID AND i.order_item_type = 'line_item') AS 'Items Ordered'
You can change separator between product name and quantity in CONCAT function, now it is ' x '. I also add i.order_item_type = 'line_item' to where clause - it prevents from getting shipping, fees and coupons. If you need it all in your query - just delete it.
Thanks Lucek, absolutely perfect.
Useful. A minor modification could also enable it to create .csv file to be imported by Calc, Excel, etc?
You can generate the CSV with PHP when you run this query. Is that what you're searching for?
|
STACK_EXCHANGE
|
This topic is the first in a series of articles on a global approach to object-oriented ABAP, aimed at capitalizing on your developments in the long term.
As a basic introduction: often, when we code a custom OO framework, that framework is only used for the current project. What if your framework could be more global, could grow without any direct action by you and, finally, could be usable across several projects and several clients?
That is why it is worth knowing that methods can be created dynamically, giving your framework more flexibility, more factorization, better evolution and a self-evolving design. Let's begin...
After some investigation, successful tests and a few obsolete code samples found along the way, I would like to share what is, in my view, the best way to enhance a class dynamically at runtime. This snippet is based on the framework SAP uses in release 7.40, with security checks and a minimum of lines of code.
Well, for this example, I will create a class ZCL_TEST1 that will dynamically create a method in the class ZCL_TEST2. The process is as follows:
1) Call a function module to create a method in the target class
2) Call a function module to add the implementation of the fresh method
3) Call a function module to regenerate the sections of the target class
4) Call the dynamic method of the target class
Here we go... :smile:
1) In SE24, create the class "ZCL_TEST2". This class will have its methods created through "ZCL_TEST1".
2) In SE24, create the class "ZCL_TEST1" with a CONSTRUCTOR, to simplify our example. This class will create a method in the "ZCL_TEST2" class during instantiation.
3) In the CONSTRUCTOR, you have to call three function modules (complete source code in attachment), succinctly:
3.1) Initialize the variables
3.2) Call the function module that creates a method in the target class
3.3) Initialize and call the function module that creates the implementation in the target class
3.4) Call the function module that regenerates the sections of the target class (private, protected and public sections)
(source code in attachment)
At this point, your class ZCL_TEST1 is able to act on the class ZCL_TEST2.
Finally, execute the ZCL_TEST1 class locally several times with F8 and check out the result in ZCL_TEST2.
That is done!
In the next topic, I will be glad to share in which contexts this kind of development can be useful, and how exciting it is to enhance your custom framework with several design patterns in order to improve your architectural layers dynamically at runtime.
The final goal will be to decrease your lines of code, increase reusability and be able to call generic methods that do not yet exist while you are writing code, and at the same time to grow your custom framework by adding these freshly created generic methods across your miscellaneous projects.
|
OPCFW_CODE
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from rest_framework.filters import OrderingFilter, SearchFilter
from rest_framework.generics import (
ListCreateAPIView,
RetrieveUpdateDestroyAPIView,
)
from rest_framework.permissions import IsAuthenticatedOrReadOnly
from rest_framework.response import Response
from rest_framework.views import APIView
from boards.models import Board
from .pagination import BoardPageNumberPagination
from .permissions import IsAdminOrReadOnly
from .serializers import BoardSerializer
class BoardListCreateAPIView(ListCreateAPIView):
"""
View that returns a list of boards & handles the creation of
boards & returns data back.
"""
queryset = Board.objects.all()
serializer_class = BoardSerializer
permission_classes = [IsAuthenticatedOrReadOnly]
pagination_class = BoardPageNumberPagination
filter_backends = [SearchFilter, OrderingFilter]
search_fields = ['title']
class BoardRetrieveUpdateDestroyAPIView(RetrieveUpdateDestroyAPIView):
"""
View that retrieves, updates or deletes (if the user is its admin) the board.
"""
queryset = Board.objects.all()
serializer_class = BoardSerializer
permission_classes = [IsAuthenticatedOrReadOnly, IsAdminOrReadOnly]
lookup_field = 'slug'
lookup_url_kwarg = 'slug'
class SubscribeBoardView(APIView):
def get(self, request, format=None):
"""
View that subscribes/unsubscribes the user to a board and returns the action status.
"""
data = dict()
user = request.user
board_slug = request.GET.get('board_slug')
board = Board.objects.get(slug=board_slug)
if board in user.subscribed_boards.all():
board.subscribers.remove(user)
data['is_subscribed'] = False
else:
board.subscribers.add(user)
data['is_subscribed'] = True
data['total_subscribers'] = board.subscribers.count()
return Response(data)
class GetSubscribedBoards(APIView):
def get(self, request, format=None):
"""Return a list of user subscribed boards."""
boards = request.user.subscribed_boards.all()
boards_list = [{'id': board.id, 'title': board.title} for board in boards]
return Response(boards_list)
class TrendingBoardsList(APIView):
def get(self, request, format=None):
"""Return a list of trending boards."""
boards = Board.objects.all()
trending_boards = sorted(boards, key=lambda instance: instance.recent_posts(), reverse=True)[:5]
trending_boards_list = [{'title': board.title, 'slug': board.slug} for board in trending_boards]
return Response(trending_boards_list)
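As a standalone illustration of the trending selection in TrendingBoardsList, with plain dicts standing in for Board objects and a 'recent_posts' value standing in for the recent_posts() method (the sample data is made up):

```python
# Stand-in data: plain dicts replace the Django Board model.
boards = [
    {'title': 'python', 'recent_posts': 12},
    {'title': 'rust', 'recent_posts': 30},
    {'title': 'go', 'recent_posts': 7},
    {'title': 'java', 'recent_posts': 21},
    {'title': 'cpp', 'recent_posts': 3},
    {'title': 'haskell', 'recent_posts': 18},
]

# Same pattern as the view: sort by the recency metric, keep the top five.
trending = sorted(boards, key=lambda b: b['recent_posts'], reverse=True)[:5]
print([b['title'] for b in trending])
# → ['rust', 'java', 'haskell', 'python', 'go']
```

Note that sorting the whole queryset in Python works fine for small tables but loads every board into memory; with many boards, annotating and ordering in the database would scale better.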
|
STACK_EDU
|
Ph.D. in Quantitative Psychology
Bio: I am originally from the greater Cincinnati, Ohio area and grew up in a small town called Madeira.
Education: B.S. in Mathematics from THE Ohio State University. M.S. in Statistics and a Ph.D. in Quantitative Psychology from the University of Illinois. Post-doctoral fellowship through the Intelligence Community, working at the College of Information Sciences and Technology at Penn State University.
I study judgment and decision making under uncertainty. I have found that there are many contexts where observations (scientific or intuitive) provide far less information than is realized because of veiled violations of assumptions. My research approach is to formalize this disconnect by recasting such observations as measurements, using psychometric theory to identify which assumptions are likely to hold, and highlighting the problematic role of violated assumptions for judgment, inference, and choice. Any investigation of judgment and choice under uncertainty will require robust methodology that can account for such disconnects between observation and theory. Failure to account for this problem will lead to inaccurate inferences from observation, for both researchers and decision makers (DMs).
Theoretical Work on Global-local Incompatibility
Under uncertainty, we gather observations from the environment to generate judgments. Such everyday judgments can fall prey to a disconnect between theory and observation. I have identified contexts where proximal observations are incompatible with the theoretical target of judgment (e.g., attempting to judge global warming, economic stability, etc.). In such contexts, this global-local incompatibility serves to reduce the amount of information in forming a judgment, while simultaneously increasing perceived confidence in this judgment. My research program uses judgment of uncertain scientific results as a test bed for understanding the role of the environment in judgment (Broomell & Kane, 2017). One arm of this research program has investigated judgments of climate change, linking them to personal experiences (Broomell, Budescu, & Por, 2015; Broomell, Winkles, & Kane, 2017). Another arm has also applied this overall approach to investigating perceptions of tornado danger (Dewitt, Fischhoff, Davis, & Broomell, 2015), an uncertain context where DMs tend to ignore official warnings. More recent work in progress (funded by NOAA), is building further on investigating the cues in the environment that generate perceptions of risk in the public, and whether those cues are valid for indicating tornado danger or not. Together, these projects demonstrate the strong role that local perceptions play when attempting to evaluate large scale variables (such as global warming or natural hazards).
Methodological Work on Model Fitting
I also address the disconnect between theory and observation in my methodological work on choice modeling. For example, I have shown that the experimental stimuli used for modeling can cause serious inferential problems, for both model comparison (Broomell, Budescu, & Por, 2011) and parameter estimation (Broomell & Bhatia, 2014). This is summarized in Broomell, Sloman, Blaha, & Chelen (2019).
Broomell, S. B., Sloman, S., Blaha, L. M., & Chelen, J. (2019). Interpreting Model Comparison Requires Understanding Model-Stimulus Relationships. Computational Brain & Behavior.
Dewitt, B., Fischhoff, B., Davis, A. L., Broomell, S. B., Roberts, M., & Hanmer, J. (2019). Exclusion criteria as measurements I: Identifying invalid responses. Medical Decision Making.
Dewitt, B., Fischhoff, B., Davis, A. L., Broomell, S. B., Roberts, M., & Hanmer, J. (2019). Exclusion criteria as measurements II: Effects on utility and health policy implications. Medical Decision Making.
Fischhoff, B. & Broomell, S. B. (In Press). Judgment and Decision Making. Annual Review of Psychology.
Broomell, S. B., Winkles, J.F, & Kane, P. B. (2017). The Perception of Daily Temperatures as Evidence of Climate Change. Weather, Climate, and Society, 9, 563-574.
Broomell, S. B. & Kane, P. B. (2017). Public Perception and Communication of Scientific Uncertainty. Journal of Experimental Psychology: General, 146(2), 286-304.
Broomell, S. B., Budescu, D. V., & Por, H. H. (2015). Personal experience with climate change predicts intentions to act. Global Environmental Change, 32, 67-73. DOI: 10.1016/j.gloenvcha.2015.03.001.
Dewitt, B., Fischhoff, B., Davis, A., & Broomell, S. B. (2015). Environmental risk perception from visual cues: Caution and sensitivity in evaluating tornado risks. Environmental Research Letters, 10(12), 124009.
Broomell, S. B. & Bhatia, S. (2014). Parameter Recovery for Decision Modeling Using Choice Data. Decision, 1, 252-274.
Broomell, S. B., Budescu, D. V., & Por, H. (2011). Pair-wise Comparisons of Multiple Models. Judgment and Decision Making, 6, 821-831.
Broomell, S. B., & Budescu, D. V. (2009). Why are experts correlated? Decomposing correlations between judges. Psychometrika, 74 (3), 531-553.
|
OPCFW_CODE
|
#python
'''
Retesting old usearch functions
Not currently in use.
'''
import os
import logging
"""
Inputs:
fna_filepath: (str) Filepath to file with nucleotide sequence of genome.
output_filename: (str) name of file to write out to.
usearch_path: (str) Filepath to usearch executable file.
mincodons: (str) Number representing the min length for protein analysis.
Outputs:
response: (str) Represents success of function.
"""
def usearch_fast_x(fna_filepath, output_filename, usearch_path, mincodons):
os.chmod(usearch_path, 0o777)
response = os.system(usearch_path + " -fastx_findorfs " + fna_filepath + " -aaout " + output_filename + " -orfstyle 7 -mincodons " + mincodons )
logging.debug(response)
return response
def usearch_tester(usearch_path):
#Checking problem with executable:
logging.debug(os.system("file " + usearch_path))
logging.debug(os.system("uname -m"))
def usearch_blast():
return 0
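If these functions were ever revived, the command could be built more safely with subprocess and an explicit argument list (a sketch, not part of the original module; the flags are copied from usearch_fast_x above):

```python
import subprocess

def usearch_fast_x_subprocess(fna_filepath, output_filename, usearch_path, mincodons):
    # Passing arguments as a list avoids shell quoting problems with paths
    # that contain spaces; returncode plays the role of os.system's response.
    result = subprocess.run(
        [usearch_path, "-fastx_findorfs", fna_filepath,
         "-aaout", output_filename,
         "-orfstyle", "7", "-mincodons", str(mincodons)],
        capture_output=True, text=True,
    )
    return result.returncode
```

Unlike os.system, this also captures stdout/stderr on `result`, which is handy for the kind of debugging usearch_tester was doing.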
|
STACK_EDU
|
Who: Zach Oxendine, age 26
What: Service Engineer and Tribal Camp Director
Where: Reston, Va.
You had what could be called an unconventional childhood. Both of my parents are deaf, and so I grew up in a deaf household and deaf culture as well as Native American culture. I’m from the Lumbee tribe, and I grew up on land my grandfather purchased in Rock Hill, SC, about a five-minute drive from the reservation. When you’re in a deaf household with seven brothers and sisters, there’s not a lot of time spent on school; there’s more time spent helping your parents and your little sisters. I was one of those kids who were in honors classes, did great on the standardized testing but probably didn’t turn in most of their homework. But I think the kinds of skills I learned growing up in a household like mine actually helped me handle more organizational, more corporate, more military-style tasks.
How did you chart a path forward? I struggled with my GPA, and any college I would have gotten into would have cost an arm and a leg. So I joined the Air Force right out of high school, graduated from Tech School in the spring and went to my first base the same year as a cyber-systems operator, which translates into cybersecurity server administration and network administration.
How did you land your job at Microsoft? Really by persistence. While I was in the Air Force, we worked alongside many defense contractors and companies such as Cisco and Microsoft. I went to a cookout, and I met a gentleman playing cornhole who was a boss at Microsoft. He said I’d be a great fit and made me believe that maybe this is a place I could work one day. So I transitioned out of active duty, studied a semester at the University of South Carolina, and applied for a couple of Microsoft jobs that I didn’t get. Six, seven, eight months go by, and then I saw a Microsoft position posted on LinkedIn for a service engineer in Reston, Va. I landed that job and quickly discovered that Microsoft has a thriving community of Native American employees and Native American allies, and not only that, it has a lot of people interested in giving back.
You’re a camp director, too? I was able to put together a STEM camp for indigenous kids that includes high school and college students from the Lumbee and a few other tribes, as well as government representatives from the Lumbee and the Catawba tribes. We have sponsors and hosts in the Washington, DC, region representing the tech industry and academia. We are being hosted at the Department of State by the first ever Native American woman to be its director of diversity and inclusion. So these kids are going to hear from many different Native American employees and other allies as well from these companies, learn about their jobs, and learn about their stories. If we can have more and more people find their way into better economic standing in our communities, we can find our place in this country where we can be self-sustainable.
Are you working with the deaf community? Yes. There’s an organization based in the DC area called Deaf in Government run by deaf people that helps them get jobs and opportunities in America. I’m connected with my friends there, helping them out in any way I can. Growing up in a deaf household has its challenges—but it also has benefits, because you have a great sense of humor. Deaf people are great storytellers. They’re very light. They take life as it comes.
|
OPCFW_CODE
|
#include "../eosio.token.hpp"
#include <eosio/asset.hpp>
#include <eosio/eosio.hpp>
#include <eosio/system.hpp>
using namespace eosio;
using namespace std;
static constexpr extended_symbol EOS_SYMBOL =
extended_symbol(symbol("EOS", 4), name("eosio.token"));
class [[eosio::contract("attacker")]] attacker : public contract {
public:
using contract::contract;
attacker(eosio::name receiver, eosio::name code,
eosio::datastream<const char *> ds)
: contract(receiver, code, ds) {}
TABLE balance {
asset eos_balance;
uint64_t primary_key() const { return 0; }
};
typedef eosio::multi_index<"balance"_n, balance> balance_t;
balance_t balance_table = balance_t(get_self(), get_self().value);
ACTION attack() {
// 1. store current balance
asset current_balance = token::get_balance(
EOS_SYMBOL.get_contract(), get_self(), EOS_SYMBOL.get_symbol().code());
balance_table.emplace(get_self(),
[&](auto &x) { x.eos_balance = current_balance; });
// 2. do the bet which resolves in same action
token::transfer_action transfer_act(
EOS_SYMBOL.get_contract(),
permission_level{get_self(), name("active")});
transfer_act.send(get_self(), name("vulnerable"), current_balance, "");
// 3. check if we won after the bet resolved (this inline action is executed
// after the bet)
checkwin_action checkwin_act(get_self(),
permission_level{get_self(), name("active")});
checkwin_act.send();
}
ACTION checkwin() {
// 4. revert whole transaction if we lost
auto current_balance = token::get_balance(
EOS_SYMBOL.get_contract(), get_self(), EOS_SYMBOL.get_symbol().code());
auto previous_balance_itr = balance_table.require_find(0, "prev balance not found");
print(previous_balance_itr->eos_balance, "\n");
print(current_balance, "\n");
check(previous_balance_itr->eos_balance.amount < current_balance.amount, "would have lost");
balance_table.erase(previous_balance_itr);
}
using checkwin_action =
eosio::action_wrapper<"checkwin"_n, &attacker::checkwin>;
};
|
STACK_EDU
|
Table of Contents
2. Using the command line
When you log in via SSH, you are presented with the command line interface, by means of the shell. The current default is Bash. Bash stands for “Bourne Again SHell”, and is one (but the most commonly used by far) of many shells available for Linux. A shell is what interprets what you type into the prompt and makes things happen - different shells do things differently. Your default shell is bash, although you can change it if you wish, but this document will only give you a brief introduction to using the shell to run programs and use the filesystem.
The first thing you should notice is that your shell puts something at the start of every line. This is called the prompt, and it looks something like this:
tcmal@sontaran:~$
The default prompt tells us our username (tcmal), the host we're on (sontaran), and the path we're at (~, which is an alias for your home directory).
We can enter commands after the prompt, hit enter, and once they're done executing we'll get another prompt back.
tcmal@sontaran:~$ whoami
tcmal
tcmal@sontaran:~$ hostname -f
sontaran.tardisproject.uk
tcmal@sontaran:~$ pwd
/home/tcmal
While you're at your prompt, you can also use the arrow keys to scroll back up your command history, and Ctrl+R to search through it.
As mentioned above, our current directory is shown at the start of our prompt. You'll almost always start off in ~, your home directory.
We can use cd to change the directory we're in. It takes the name of the directory we want to change to, with two special cases: . always means stay in the same directory and .. means go one level up.
tcmal@sontaran:~$ cd ..
tcmal@sontaran:/home$ cd tcmal
tcmal@sontaran:~$
This is more interesting if we make some directories, so let's make a few test ones using mkdir.
tcmal@sontaran:~$ mkdir test
tcmal@sontaran:~$ cd test
tcmal@sontaran:~/test$ mkdir test2
tcmal@sontaran:~/test$ cd test2/
tcmal@sontaran:~/test/test2$ pwd
/home/tcmal/test/test2
tcmal@sontaran:~/test/test2$ cd ..
tcmal@sontaran:~/test$
pwd outputs the full path to whatever directory we're currently in. This path is absolute because it starts with a / - the filesystem root.
We can use ls to see what's in our directory.
tcmal@sontaran:~/test$ mkdir test3
tcmal@sontaran:~/test$ ls
test2 test3
tcmal@sontaran:~/test$ ls -al
total 16
drwxr-xr-x 4 tcmal 1004 4096 Oct 26 22:49 .
drwxr-xr-x 7 tcmal 1004 4096 Oct 26 22:49 ..
drwxr-xr-x 2 tcmal 1004 4096 Oct 26 22:49 test2
drwxr-xr-x 2 tcmal 1004 4096 Oct 26 22:49 test3
As well as positional arguments (like cd test), most Linux commands will also accept flags, starting with either a single - or two.
In ls -al we've passed 2 short flags, a and l. These correspond to 'all' and 'long', and give us the long printout including the 'hidden' directories . and .. . Because they're short flags, we only type one -, although we could also type ls -a -l.
Now that we know how to make directories, let's clean up after ourselves. Replace <username> with whatever your username is.
tcmal@sontaran:~/test$ cd /home/<username>
tcmal@sontaran:~$ rm --recursive test
tcmal@sontaran:~$
Note that we gave cd an absolute path - this is totally fine. We also could have said cd ~ - bash would have replaced the ~ with the same thing before it called cd.
When we call rm, this time we pass it the long flag recursive. Unlike short flags, you need to pass each long flag with a separate '--'. The recursive flag is needed when you're deleting directories - like most long flags there is a corresponding short flag, so we could have said rm -r test.
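As a quick sanity check that the long and short forms really are interchangeable, this snippet creates and deletes a small directory tree both ways (the directory names are throwaway examples):

```shell
# Create a nested directory, then delete it with the long flag...
mkdir -p demo/inner
rm --recursive demo
# ...and again with the equivalent short flag.
mkdir -p demo/inner
rm -r demo
```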
Learning how to use programs
The best way to learn how a program works and how to use it is by reading the man page (manual page) built into the system about that program. To find out what a program does and how to use it, simply type man [program name]. Try it now -
This takes us away from our shell and into a pager. You can scroll up and down using up and down arrows, page up and page down, and space and return. Don't worry if you don't understand everything written there, you're not expected to memorize all of a program's options - that's why they're so easily accessible in man pages! Press q to quit the man page reader.
Finding a program for your purpose
If you know what sort of program you want to run, but aren't sure of the name (or if such a program exists), you can use apropos to search for a program by function. For instance, say we want to find an IRC client but we don't know the names of any. We type:
But we get quite a lot of matches, most of which are no use to us. This is because the search has turned up a load of results where “irc” was part of another word, such as “circular”. If we have a quick look at man apropos we find out that the -e flag searches for exact matches:
apropos -e irc
Yay, we've narrowed our matches down to what we wanted! Alternatively we could have tried apropos “irc client”, which would have yielded the same results. However, typing apropos irc client without the quotes would have returned twice the unwanted results, as it would have searched for both “irc” and “client” and given you results for either. The quotes tell Bash to treat what you put in them as one continuous string.
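A quick way to see how the shell splits arguments is printf '%s\n', which prints one line per argument it receives:

```shell
# Unquoted: the shell passes two separate arguments, so two lines print.
printf '%s\n' irc client
# Quoted: the shell passes a single argument containing a space, so one line prints.
printf '%s\n' "irc client"
```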
Now go to the next tutorial to learn about how to create a web page that is automatically hosted by Tardis, on your own subdomain.
|
OPCFW_CODE
|
A computer network is a telecommunications network which allows computers to exchange data. While networking is not strictly an area of computer programming, it is an important sub-area of computer science generally. A student seeking networking assignment help commonly finds trouble in the following areas:
There is a clear disconnect between the computer science field and the message women receive regarding their ability to succeed at tech companies. This guide examines the history behind this disparity and how educators, parents, employers and computer scientists can reverse the trend.
By fostering an interest in scientific topics at an early age and working to eliminate harmful stereotypes and barriers, educators and parents can work together to help girls keep confidence and curiosity in STEM subjects. For professionals already in the field, women can offer to be role models and mentors, while men can take a stand against sexist or prejudiced behaviour in the workplace.
Our essayists stay informed about all the latest developments in the field of software practice and design. We provide a number of assurances too. We can promise the best results depending on the course requirements.
Students use our solutions to study from and to compare against their own work. We want our students to increase their knowledge and understanding of different subjects.
Nearly all of our tutors hold advanced degrees in their fields, and many hold Ph.D.s or the equivalent. All tutor applicants must provide academic transcripts for each degree they hold, and are tested and screened carefully by our staff.
Our support begins right from your first inquiry and continues until you have no questions left to clarify on your own. To place an order, email us at email@example.com or visit our website at
I was not able to find any reference source for my assignment, and I got stressed because of it. I took help from them after reading the reviews. They are really amazing! I'll definitely return to try their other writing services!
Data Mining and Data Analytics: Data mining is a process of examining raw data and turning it into useful information - information that can be used to improve sales, reduce costs, understand customer needs, and so on. Data mining software is one of the analytical tools used to traverse and analyse the data.
Students pursuing their degree in computer science at universities in Canada have to write insightful assignments on rigorous concepts such as coding, computational complexity theory, Visual Basic, Bash, etc. in order to secure high grades.
A solution should be planned either by preparing a flowchart or by writing pseudocode. Ideally, a programmer should do both.
Assessment 1 - Scripting for System Automation (COMP9053): This assessment will test your knowledge of command line tools, your ability to combine them to solve problems, your ability to write shell scripts, and your ability to work from a high-level problem statement to a concrete solution via shell scripting.
We achieve our mission to produce graduates capable of taking leadership positions in the fields of electrical engineering and computer science and beyond.
We offer top quality computer science assignment help at an affordable rate. We can provide the best value in the marketplace on account of the following factors:
|
OPCFW_CODE
|
At Ezoic, we're building a better internet experience with our all-in-one digital publishing platform. We are looking for motivated, fast-learning developers who are excited to build products that scale to millions of visitors every day.
At its core, our product uses artificial intelligence to learn how users interact with digital content and proactively delivers the optimal front end for every visitor to our customers' sites. We've built tools to automatically generate progressive web apps, in-depth reporting dashboards to give publishers novel insights into their users, data pipelines to process hundreds of millions of events, and tools to vastly increase throughput and effectiveness of our machine learning decisions, just to name a few. We're continuously growing, and we're looking for frontend and backend developers who love building and solving problems across the entire tech stack.
You must be a tech-savvy professional who enjoys learning about new technologies and solving problems. This is a fast-moving engineering role where you will be able to work on core products and develop designs and functionality from scratch. You must enjoy collaborating with other smart engineers, creating scalable solutions to improve publishers' digital properties.
We are hiring engineers across the entire stack. If you want to focus on backend, frontend, or full stack, we have a position that will allow you to work on the type of code that you are passionate about. Our flexibility also allows you to move between roles as your interests change or as new features are being developed.
- Big Data: It's the backbone of everything we do. We use Redshift and MySQL to store just about everything you could imagine about a visitor's interactions with digital content.
- Machine Learning: We use tensorflow, scikit-learn, Python, Golang, and state-of-the-art tools we've developed in-house to make data-driven decisions about the kinds of ads, content, and layouts to show each visitor.
- Web Development: We build customer-facing dashboards, apps, and internal tools using whatever framework is best for the job, whether it's Vue.js, Angular, JS, Go or PHP. Plus, with a wide variety of digital publishers, you'll get experience working with a diverse set of web development technologies.
- Cloud Computing: Our stack is built primarily upon AWS (EC2, Dynamo, RDS, Kinesis, Lambda, etc.), along with other tools such as Kubernetes, Docker, and Apache Spark.
- B.S. Computer Science or related (In lieu of degree, 4 years of relevant work experience)
- More experienced engineers are encouraged to apply. Compensation increases with experience.
- Experience with at least one of: Go / Swift / Java / Python / or PHP
- Web application development experience is a plus.
- A drive to solve problems
Ezoic is an inaugural member of the Google Certified Publishing Partner Program. This means we work closely with Google to help websites optimize their ad revenue. We actually have won awards directly from Google for the artificial intelligence that we've built. Cool, right?!?
Ezoic is a technology-first company. Our CEO is a developer by trade, and our engineers are given all the tools needed to succeed. We empower our engineers to make a lot of their own decisions about both design and implementation, and encourage them to come up with new ideas to improve the application. We also aim to keep a good work-life balance, based on our philosophy that happy engineers are more productive. We want you to be working on something that you are passionate about and allow engineers to move around to keep things fresh. We accomplish this via flexible schedules, unlimited time off, and all employees are welcomed to take 20 hours of paid volunteer time off. Other benefits include a 401k matching plan, great health insurance, and stock options!
Currently most engineers work remotely temporarily due to COVID-19, but we are slowly reopening our office for those engineers who want to work from there. We have a brand new office in Carlsbad, CA, where we plan to return to soon, while still allowing work from home. This means we are only looking for engineers who are willing to relocate to the Carlsbad, CA area, but we will be flexible on timing based on how the pandemic goes. Every engineer works at their own quad-monitor setup on a standing desk. We pride ourselves on getting stuff done, but when you need a break, Ezoic offers great perks, including unlimited vacation time, catered lunches, snacks, flexible hours, ping pong, video games, and pool.
|
OPCFW_CODE
|
For whatever reason I started blogging again last week. Not knowing why isn’t due to a lack of introspection on my part.
Maybe the nauseating weight of the Trump administration was suppressing my desire to write for the previous three-and-a-half years? Or maybe I’m just arbitrary and lazy?
It’s also unclear how long I can keep this up. Inspiration and a willingness to type are not something which you can purchase online or install with a package manager. I suppose we’ll find out.
However, the mechanics of blogging again are simpler to understand.
For one thing, as I write here:
Technically, it’s all created using software. I don’t actually type all that markup manually, like some filthy animal.
And since the site remained unchanged from the time I generated it during June of 2017, it was still working fine as of last week. Keep that in mind when you consider the architecture for your own blog. Once you’ve created it, static HTML is pretty much maintenance free.
However, there’s that whole problem of generating it again. With new content. Yeah.
I had all the publishing software, content, configuration, etc. installed on my Mac originally. But since we all know I’m using a Windows PC now, I had to migrate everything.
That meant just copying my blog posts since they’re simply Markdown documents with YAML frontmatter. Easy.
But my content management system is Nanoc, a Ruby-based generator. And while it’s reasonably cross-platform and mostly runs on Windows, it’s not officially supported there. More importantly, the scripts and other tools I built on top of Nanoc were kinda Unix-adjacent, if you know what I mean.
This is where the Windows Subsystem for Linux (WSL) came to the rescue.
Normally, I use the Windows-specific version of ruby.exe for my other projects. But with WSL, you really need to apt-get ruby and shove that baby into Ubuntu as well.
After that it was just a gem install of kramdown, my Markdown parser of choice. At least, I thought that's all I needed.
Turns out the kramdown-parser-gfm Gem is required too, since I depend on GitHub-flavored Markdown and the kramdown developers removed support for it from the main project back in 2019. Surprise, surprise. But that's what I get for not parsing any Markdown for so damn long.
By the way, for any of you also installing Ruby Gems in WSL or other Unix-like environments, don't preface gem install with sudo. This is both unnecessary and unwise.
It’s unnecessary because you can simply append --user-install to those installation commands. This will place them in ~/.gem, your local Gem directory.
And it’s unwise because you don’t want them placed in your system-wide Gem directory. Doing so will delete, overwrite or otherwise fuck them up whenever you update Ruby.
Of course, you’ll need to add that local Gem directory to your $PATH variable in ~/.bash_profile or whatever the equivalent is for your shell. Otherwise the shell can’t find those Gems. Duh.
Here’s an example ~/.bash_profile showing how to do just that:
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
export PATH="$HOME/.gem/ruby/2.7.0/bin:$PATH"
Obviously the version of ruby in that path will need to be adjusted if yours is different.
So after getting the correct Gems installed in the correct places, I then had to make a few changes to my Nanoc configuration files and various homebuilt Unix-y scripts. These were mostly just converting some hard-coded macOS-specific directory names to their Windows-specific equivalents.
And then… it all worked. Flawlessly.
Which means migration was not really much of a problem at all. Sure, thinking ahead on what I needed to do took a while, but the actual typing necessary to make it happen was just a matter of minutes.
Kind of anticlimactic, really.
Of course, now I have to figure out what to write. Dammit.
|
OPCFW_CODE
|
The Khronos Group finalized the specifications for the OpenCL 2.2 standard that allows developers to handle compute-heavy tasks by leveraging a system's CPU and GPU together. Further, in an effort to encourage more contributions to the standard, the Khronos Group released the full specs and conformance tests on GitHub for the first time. That access to the standard's inner workings should allow devs to see how it might be improved.
OpenCL is supported by everyone from Nvidia and AMD to Apple and Intel. Each company explains the standard in its own way--Nvidia described it as a "low-level API for heterogeneous computing that runs on CUDA-powered GPUs"--but the most basic overview comes from Apple, which said devs can "use OpenCL to incorporate advanced numerical and data analytics features, perform cutting-edge image and media processing, and deliver accurate physics and AI simulation in games." That flexibility, combined with the rise of dedicated GPUs, makes OpenCL quite popular in many apps.
Here's what the Khronos Group said about version 2.2 of the standard:
“By finalizing OpenCL 2.2, Khronos has delivered on its promise to make C++ a first-class kernel language in the OpenCL standard,” said Neil Trevett, OpenCL chair and Khronos president. “The OpenCL working group is now free to continue its work with SYCL, to converge the power of single source parallel C++ programming with standard ISO C++, and to explore new markets and opportunities for OpenCL — such as embedded vision and inferencing. We are also working to converge with, and leverage, the Khronos Vulkan API — merging advance graphics and compute into a single API.
The last section of that statement might be the most interesting. Embedded vision and inferencing--which is when AI applies things it's learned from one data set to another--have become increasingly important to AI-focused companies. (Which is at this point seemingly every major tech company.) Nvidia and Google have engaged in a public battle to say they've developed the best inferencing hardware, and Nvidia's Volta GPU architecture debuted with the Tesla V100, the standout feature of which is a "Tensor Core" meant to help it outperform its predecessors with deep learning applications.
Now it seems the Khronos Group wants OpenCL to help these companies reach their machine learning goals. The group's note about OpenCL converging with the Vulkan API is also interesting. Vulkan is a low-level graphics API that debuted in 2015 and has since been added to game engines like Unity, popular titles like Doom, and everything from the Nintendo Switch console to the latest graphics drivers and utilities from Nvidia and AMD. The Khronos Group said at GDC 2017 that Vulkan's only expected to become more popular as other companies rush to support the API.
Merging OpenCL and Vulkan could make it easier for devs to create even better-performing apps or games. We'll have to keep an eye out as they come closer to convergence. In the meantime, you can check out pretty much everything you need to know about OpenCL 2.2 on GitHub.
|
OPCFW_CODE
|
'use strict';
const client = require('../client');
const recalculatePlayerStatistics = require('../coreFunctions/recalculatePlayerStatistics');
const callRecalculate = () => {
console.log('Recalculating players statistics');
recalculatePlayerStatistics();
};
const deleteMergingPlayer = (playerTwoName) => {
return client.query('DELETE FROM players WHERE name = $1;', [playerTwoName])
.catch((error) => {
throw error;
});
};
const updateLoserNameInSets = (playerOneName, playerTwoName) => {
return client.query('UPDATE sets SET loser_name = $1 WHERE loser_name = $2;', [playerOneName, playerTwoName])
.catch((error) => {
throw error;
});
};
const updateWinnerNameInSets = (playerOneName, playerTwoName) => {
return client.query('UPDATE sets SET winner_name = $1 WHERE winner_name = $2;', [playerOneName, playerTwoName])
.catch((error) => {
throw error;
});
};
const combineResults = (playerOneName, playerTwoName, resolve) => {
    console.log(`Merging results of ${playerTwoName} into ${playerOneName}`);
    // Chain the queries sequentially instead of nesting .then callbacks;
    // any rejection propagates down the chain unchanged.
    return updateWinnerNameInSets(playerOneName, playerTwoName)
        .then(() => updateLoserNameInSets(playerOneName, playerTwoName))
        .then(() => deleteMergingPlayer(playerTwoName))
        .then(() => {
            resolve();
            callRecalculate();
        });
};
module.exports = combineResults;
|
STACK_EDU
|
from docx import Document
from docx.enum.text import WD_LINE_SPACING
from docx.enum.text import WD_PARAGRAPH_ALIGNMENT
from sanitize_filename import sanitize
from pathlib import Path
from services.exporter_interface import ExporterInterface
class WordExporter(ExporterInterface):
    def export(self, retrieved_annotations: dict, directory: str) -> None:
        for author in retrieved_annotations:
            author_directory = "{}/{}/".format(directory, sanitize(author))
            Path(author_directory).mkdir(parents=True, exist_ok=True)
            books = retrieved_annotations[author]
            for book in books:
                book_file_name = "{}/{}.docx".format(author_directory, sanitize(book))
                document = Document()
                document.add_heading(book, level=1)
                document.add_paragraph(author, style='Caption')
                p_blank = document.add_paragraph("")
                # Line spacing must be set on paragraph_format, not on the paragraph itself.
                p_blank.paragraph_format.line_spacing_rule = WD_LINE_SPACING.DOUBLE
                chapters = books[book]
                for chapter in chapters:
                    document.add_paragraph(chapter, style='Title')
                    p_blank = document.add_paragraph("")
                    p_blank.paragraph_format.line_spacing_rule = WD_LINE_SPACING.DOUBLE
                    annotations = chapters[chapter]
                    for annotation in annotations:
                        # Fall back to an empty comment when none was recorded.
                        comment = annotation.comment or ''
                        p_annotation = document.add_paragraph(annotation.text, style='Intense Quote')
                        p_annotation.alignment = WD_PARAGRAPH_ALIGNMENT.JUSTIFY
                        p_comment = document.add_paragraph(comment, style='No Spacing')
                        p_comment.alignment = WD_PARAGRAPH_ALIGNMENT.JUSTIFY
                        p_blank = document.add_paragraph("")
                        p_blank.paragraph_format.line_spacing_rule = WD_LINE_SPACING.DOUBLE
                document.save(book_file_name)
|
STACK_EDU
|
Turbulence in the earth's atmosphere degrades the true object intensity distribution of astronomical sources. The thermal gradients in the air produce random phase delays in the wavefront that cause blurring of images.
Usually all images which are exposed for several time scales of the atmospheric turbulence are classified as long-exposure images. As a general rule of thumb, the exposure times in excess of a few hundredths of a second are considered as long exposure images. In long-exposure images the high spatial frequency information is attenuated because the recorded image is the source convolved with the time average of the point spread function (psf).
A straightforward method to measure the atmospheric psf is to measure the size of the intensity profile of an unresolved source close to the object under study. Here we assume that the medium through which the imaging is done behaves in the same way for both the object under study and the point source. If one has to get the true point spread function then the point source and the object under study should be within an isoplanatic patch.
For the sun, we do not have access to a point source for comparison. Furthermore, for extended sources like the sun, the atmospheric point spread function will not be the same on all parts of the image. We are faced with an image in which each part of the object has been convolved with a different point spread function. Hence a single point spread function will not correctly characterise the blurring of an extended object.
Another technique (Collados 1987) of solar image reconstruction uses the limb of the moon in the photographs taken during partial solar eclipse. In the absence of earth's atmosphere the moon's limb would be seen as a sharp edge against the bright Sun's surface. When imaged using a ground based telescope, the moon's limb is blurred because of the atmospheric point spread function. The gradient of the blurred limb profile of the moon gives the point spread function of the telescope and atmosphere. The point spread function thus found is used for deconvolving the point spread function from the entire image. This point spread function can be used to remove blurring only near the limb of the moon and within the isoplanatic patch which encompasses the moon's limb. Use of this point spread function for deconvolution elsewhere in the image will not give true reconstruction.
Night time observers can have single stars for deconvolution. To get a reconstruction which is close to the true object intensity distribution, the star used for determining the point spread function of the atmosphere and the object under study have to be within the same isoplanatic patch. In the case of photometry of extended objects like clusters of stars, algorithms like Daophot are used (Stetson 1987) where nonisoplanaticity effects are not considered.
The conventional method is to make a Gaussian fit to the observed profile, and the full width at half maximum of the fitted Gaussian is used to characterise the point spread function. This creates spurious features if the true point spread function is not a Gaussian. In fact, there is theoretical and experimental evidence for the non-Gaussian nature of the atmospheric psf (Roddier 1980).
We propose a method of estimating the point spread function at any arbitrary part of an extended image based on a parameter search. We assume a class of convolving kernels involving one or two parameters and look for the number of zeros and negative pixel values in the reconstruction as a function of the parameters. We show that it is possible to retrieve the unknown parameters of the kernel. The technique proposed here has been rigorously tested on simulations and also on real images.
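The parameter search described above can be illustrated with a small one-dimensional simulation. The following is a sketch only, not the paper's code: all function names, the Gaussian kernel family, and the test object are illustrative assumptions. A non-negative object is blurred by a circular Gaussian of known width, then deconvolved by Fourier division with trial kernels; counting negative pixels in each reconstruction reveals the true width, since negatives appear only once the trial kernel becomes wider than the true one.

```python
import numpy as np

def circular_gaussian(n, sigma):
    # Gaussian kernel on a ring of n samples, normalised to unit sum.
    d = np.minimum(np.arange(n), n - np.arange(n))
    g = np.exp(-d**2 / (2.0 * sigma**2))
    return g / g.sum()

def negatives_after_deconvolution(blurred, trial_sigma):
    # Inverse-filter the blurred signal with a trial kernel and count
    # how many pixels of the reconstruction come out negative.
    n = blurred.size
    K = np.fft.rfft(circular_gaussian(n, trial_sigma))
    recon = np.fft.irfft(np.fft.rfft(blurred) / K, n)
    return int(np.sum(recon < 0))

# Non-negative test object: two point sources on a flat background.
n = 256
obj = np.full(n, 1.0)
obj[100] += 50.0
obj[140] += 30.0

true_sigma = 1.0
blurred = np.fft.irfft(np.fft.rfft(obj) * np.fft.rfft(circular_gaussian(n, true_sigma)), n)

# Trial kernels narrower than (or equal to) the true one leave the
# reconstruction non-negative; wider ones produce ringing and negatives.
print(negatives_after_deconvolution(blurred, 0.5))   # narrower: expect 0
print(negatives_after_deconvolution(blurred, 1.0))   # exact:    expect 0
print(negatives_after_deconvolution(blurred, 2.0))   # wider:    expect > 0
```

The true width can thus be estimated as the largest trial width that still yields a reconstruction free of negative pixels.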
|
OPCFW_CODE
|
Installing Sailfish OS on Motorola Moto G 2013 (falcon)
These are the instructions for the Motorola Moto G 2013. It has one of the most complete community ports of Sailfish, and virtually all of the hardware is usable. The phone can be bought for around C$100 and is powerful enough to run the OS. As this is a community port, it does not have some of the proprietary bits that can be found on official Sailfish phones such as the Jolla phones. The most obvious of these is the Android run-time, so you will not be able to run Android apps after installing Sailfish OS.
Keep in mind that this process will delete all data on your phone as well as void warranty!
Obtaining the files
To install Sailfish you need the following files:
You should be able to see your device now by typing:
adb devices
Unlocking the bootloader
The bootloader needs to be unlocked so that alternative OSes can be installed. This unlocking is what voids your warranty. It cannot be undone.
You will need a Motorola account for this to register your phone and obtain the unlocking code. Go to Motorola Support
now to get your unlock code. You will also get the instructions there to unlock the bootloader, but in brief, the steps are: get the unlock key from your phone, paste that code in Motorola's form, agree to the licence and voiding of warranty, retrieve the unlock code by e-mail, and unlock the phone bootloader.
Now that your phone is free, you can install TWRP which is required to install both CyanogenMod and Sailfish OS. First get your phone in fastboot mode by typing
adb reboot bootloader
Then install the img file you just downloaded using fastboot by typing:
fastboot flash recovery twrp-3.1.0-0-falcon.img
The exact version shouldn't really matter; just use what you downloaded. After flashing has completed, reboot by typing
fastboot reboot
The phone should now boot into TWRP. Alternatively, you can power down the phone and then hold the power button and volume-down button for 2 to 3 seconds before letting go.
To install CyanogenMod 12.1, you may need to upgrade your bootloader, as 12.1 requires at least version 41.18. You should be good if you upgraded the stock Android on your phone before you unlocked it. Otherwise, you can manually upgrade the bootloader by typing
fastboot flash boot motoboot_411A_xt1032.img
Afterwards, reboot your phone again by typing
fastboot reboot
Then copy the CyanogenMod zip file over to your phone by typing
adb push cm-12.1-20160418-UNOFFICIAL-falcon.zip /sdcard
Then in TWRP on your phone, first wipe everything by tapping "Wipe" and then swipe to wipe. After that, click on "Install" and select the file "cm-12.1-20160418-UNOFFICIAL-falcon.zip" you just copied over.
Installing Sailfish OS
The procedure is the same as for CyanogenMod. So copy over the Sailfish OS file by typing
adb push sailfishos-falcon-release-<version>.zip /sdcard
Then in TWRP, tap "Install" and select the Sailfish OS zip file you just copied over. You can now reboot the phone and it should boot into Sailfish OS.
Fixing some stuff
GPS may not work if you have never used GPS before installing Sailfish. This seems to be due to CyanogenMod somehow. The solution seems to be to use the GPS in the original firmware of the Moto G 2013 to get a satellite lock. Once you have a lock, install CyanogenMod and then Sailfish.
The camera app is a bit wonky. To get it working, start the app and wait until you get the not-responding message. Close the app and start it again. The camera should just work now. You will need to do this after every reboot of the phone.
To get pkcon to work, you need to disable the adaption0 repository. To do so, open the Terminal app and type
ssu dr adaptation0
And now for zypper:
devel-su zypper mr -d adaptation0
devel-su zypper ref -f
The devel-su is needed because zypper requires root access. The password can be found and set under Settings -> Developer Mode.
Browser crashes when playing video
You need to install a missing library. To fix this, go to Settings, Untrusted software, and enable the checkbox for "Allow untrusted software". Then download the missing library here. You can install it simply by opening the file. You can disable the checkbox for untrusted software now. After restarting your browser, you should be able to play videos.
When playing videos in the browser, the volume button does not change the volume for the browser, but rather changes the ring tone volume. You can change the volume buttons to always refer to media and not ring tone by opening the Terminal app and typing
dconf write /jolla/sound/force_mediavolume true
Now the volume buttons will always change the media volume. Use the ambiances to control ring tone volume instead.
If you have any CA certificates you would like to install, become root and copy them to "/etc/pki/ca-trust/source/anchors/". Then run the following command:
update-ca-trust
You will need to reboot afterwards.
CalDAV and CardDAV with ownCloud
To get calendar sharing to work with ownCloud, go to Settings, Accounts, Add account. Then choose "CalDAV and CardDAV". Fill in your username and password. As the server address, put in the full ownCloud CalDAV URL, that is: "https://example.com/owncloud/remote.php/caldav", or wherever you installed ownCloud on your server. Then untick the CardDAV check box.
To also get contacts sharing to work, make a new account. The URL is now "https://example.com/owncloud/remote.php/carddav" and you need to uncheck the box for CalDAV.
To get Eduroam to work with PEAP-MSCHAPv2, you need to save the following in "/var/lib/connman/wifi_eduroam.config":
Identity=your university e-mail here
Passphrase=your password here
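Pieced together, the file follows connman's provisioning format. The sketch below is an assumption based on connman's documented service syntax; the section name and the exact field spellings may need adjusting for your network:

```ini
[service_eduroam]
Type = wifi
Name = eduroam
EAP = peap
Phase2 = MSCHAPV2
Identity = your.name@university.example
Passphrase = your password here
```

After saving the file, connman should pick it up and offer eduroam as a known network.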
Lookout turned to First Person to overhaul its enterprise website. The goal was to shift from a SASE product company to a holistic Data-Centric Security platform, recognized as a Gartner Visionary.
My team consisted of a Creative Director, two Senior Art Directors, and me. I also collaborated with developers, UX writers, project managers, and stakeholders.
I was responsible for:
Building and managing the Figma design system
Building a component library for the Narrative team
I had an open policy regarding the design library. Art Directors were encouraged to use styles and components in their mockups. They were also free to edit the design library as needed and would let me know about changes during dailies.
The Creative Director and I met with the web team each week to present updates and get feedback. Prior to each meeting, I would review and update the documentation, and provide a list of changes.
I created a component library for the narrative team and made updates based on their feedback. I also introduced them to Figma features that streamlined their work, such as autolayout and component properties.
Lookout used Figma extensively, so their stakeholders were already comfortable using Figma comments for feedback. We created pages in each design file that were set up for stakeholder review. This experience gave me some ideas for a better feedback and retention workflow.
For two months, I conducted a weekly Figma training series with the client's web team.
Initially, this was supposed to be a two-hour introduction to the design library and deliverables. The scope expanded to include fundamentals such as styles, components, and shared libraries.
Our Narrative team conducted a Discovery Workshop to align all parties towards a unified structure for Lookout's B2B and B2C services. I helped with setup and support, while gaining new insights into our discovery process.
The Narrative team needed a new approach to defining structure. They had been using a wireframe-like structure, which was often confused with actual wireframes. They were also using Miro; I recommended that they switch to Figma.
I created a small library of component cells and section blocks. Component properties made it easy to show and hide settings and options within each cell. Autolayout organized cells within each block and stacked blocks into pages.
Ultimately, the Narrative team used over 2000 narrative blocks for Lookout's website.
One early challenge was the limitations of the new brand colors. Lookout Green was not accessible as body text, and Lookout Lime was illegible on a light background.
I generated a series of color studies. They illustrated the issues we faced, but helped us discover a few promising color combinations. From there, I produced a set of alternate AA web colors and six more neutral colors.
Another challenge we faced was determining a responsive grid. The brand guidelines and initial web page designs were static, edge-to-edge and corner-to-corner layouts.
I took the initiative to study the original brand grids and presented a variety of solutions, such as using variable-size text and layouts for the primary desktop layout.
I tend to build modular Figma projects using multiple shared libraries and layout files. This approach offers flexibility for future scaling, while adhering to a single source of truth.
Documentation was out of scope, but I felt it was necessary to at least provide basic reference sheets for text and color styles, adding brand guidelines where applicable.
This kit provides designer-friendly annotation tools. It’s based on Figma’s recommended best practices for organizing libraries. It includes a Cover Kit, a Reference Sheet Kit, and Sticky Notes.
Although we were only working on the website, I built the design system to apply globally. Brand & Color was the core library, the single source of truth for logo assets, brand color styles, and documentation.
We needed extra colors. Since they were not brand colors, they were added to the Web Library. Looking back, I found this approach to be awkward. In the future, I plan to use a single color library for all digital colors.
Text sizes and spacing were defined in the Web Library. Style sheets were created, organized by type and platform, to make finer details much more discoverable.
I used component properties and nested instances throughout the library. Properties are fantastic, reducing the number of variants we have to build. In contrast, nested instances turned out to be cumbersome, particularly when many are required.
Each component has a secondary component containing variants for responsive layouts. The two components contain the exact same properties, so they're easy to swap. The purpose was to reduce the complexity of frequently-used components.
Page mockups were separated into sections and each section was converted into a block component. These templates were designed to be detached when needed, and retain the internal components and auto-layout settings.
For the last few design systems I've built, I've always used a Brand & Color library for color styles. Lookout required accessible colors for the website, so I added them in the Web Library. But these colors will probably be needed for email campaigns, ad banners, and other use cases. I think it will be better if colors are defined in a dedicated library.
Component properties are a game changer, significantly reducing the number of variants we need to build. In contrast, instance swapping can be awkward to use, especially when many are needed. I'm satisfied with how I use properties, but I'm going to avoid using instance swapping when possible. This means flatter, redundant layouts. But they'll be easier for designers to use.
Figma makes it easy for stakeholders and others to leave comments directly on designs, but their ephemeral nature makes them difficult to maintain.
I'm considering two new file types to manage progress and feedback:
Touchstone files would be for non-designer feedback, such as stakeholder reviews. Comments can be retained, tracked, and archived without cluttering up design files. Shared libraries should NOT be connected to this file, so older layouts are not affected by component updates.
Milestones would function similarly to touchstones but serve as the master deliverables. They would provide a record of final designs and theoretically align to production releases with version numbers.
In order to display characters, rooms, outdoor environments, etc, you need to understand how 3D graphics works.
3D graphics involves a lot of mathematics. This includes concepts such as vectors, trigonometry and matrices. Luckily DirectX contains mathematical functions that you can use to process and manipulate 3D graphics.
Graphics in 3D are built from points, which I will also refer to as vertices. In 2D graphics, coordinates are referred to as (x,y), while in 3D graphics, coordinates are referred to as (x,y,z).
The diagram below compares 2D screen coordinates and 3D coordinates as represented by DirectX:
The 3D coordinate system has an x-axis, y-axis as in the 2D system. The 3D system also includes a z-axis, which goes into the screen (this is called a left-handed coordinate system). It is always hard to visualise 3D coordinates – if you stick 3 rulers together, it might be easier to visualise a point in 3D space. In the above diagram, the point (5,10,5) is obtained by travelling 5 units along the x-axis, 10 units along the y-axis and then 5 units along the z-axis (into the screen).
Overview for 3D Graphics in a game
These topics will be explained in more detail, but I just wanted to give a brief overall picture of what is going on in 3D games, particularly in displaying 3D graphics.
- The game universe (ie. what the player sees in a particular level) is a 3D model called a 3D scene.
- Inside this 3D scene, objects (3D models) are placed inside.
- The player is represented as a 3D model (eg: showing player’s hands, player’s body)
- The game needs to take into account of the player colliding with the 3D scene or other objects, players or bots (computer driven players).
- The computer needs to show on screen only those objects that the player can see. This needs to be done, otherwise the game will slow to a crawl. 3D computer graphics is very computationally intensive, so you need to take shortcuts to display the graphics with any speed. The displaying of 3D objects to the screen is called rendering. The 3D scenes and other 3D objects are created using 3D modelling software (eg: 3D Studio Max, Maya, Blender). The 3D objects or models are created as a mesh which has a skin or texture applied to it.
3D graphics includes some 3D primitives – these are objects that are just basic and used to build other 3D objects.
DirectX has the following graphics primitives that are part of Direct3D graphics engine:
- point lists – a set of points (x,y,z). Could be used for starfields for a space game.
- line lists – a set of lines, each connected by two points. Could be used as rain in a scene.
- line strips – a set of lines connected to each other (eg: a zigzag).
- triangle lists – a set of 3 points (x,y,z) makes up a triangle. Many of these triangles make up a triangle list. Useful to build 3D objects.
- triangle strips – a number of triangles that have at least one side connected to another triangle. Useful to build some 3D objects.
- triangle fans – triangles are joined together in a fan shape. Useful to build some 3D objects.
These primitives are shown below. I have emphasised the points so that you can see that some of the objects were created from joined up points.
3D objects are built from vertices of triangles. Each point (or corner) of a triangle is a vertex and 3 vertices make up one triangle. More information about vertices is here. These vertices are what the graphics engine uses to display graphics on the screen. The process of displaying an object on the screen is called rendering. The triangle is a closed figure and hence is a polygon.
A 3D object that has been built with many triangles joined together is called a mesh. Once you have the basic mesh for the object, you then create a skin or texture and wrap it around the mesh. When you are talking about the number of polygons in a 3D object, you are really talking about how many triangles it contains.
An example of a 3D object is shown below. It is a tiger taken from one of the DirectX SDK sample files using a program called Meshview. It’s from the DirectX 9 SDK (which is now old software).
Now have a look at the tiger below when you take the “skin” off it. You should be able to see the triangles! This is called wireframe view.
The tiger above, which is a 3D object, was created using 3D modelling software such as 3D Studio Max or Blender. Within this software package, you can apply textures (or skins) as well.
Some 3D modelling software packages are below:
3D Modelling Software
- Blender (Free 3D modeller – runs on Linux too)
- 3D Studio Max (3D modeller by Autodesk)
- Maya (3D modeller by Autodesk)
- Gmax (Free 3D modeller – not supported by Autodesk anymore – “cut down” version of 3D studio max)
- Milkshape (3D modeller)
- Lightwave (3D modeller)
- ZBrush – fantastic for modelling characters
- Google SketchUp – an easy to learn 3D modeller
- Anim8or – free software
A 3D object is also called a 3D model. You can either buy or download royalty free 3D models. For example, you can get them at CG Trader. Other places are on the resource page as well or you can search online.
Observability is Not Just Logging or Metrics
Lessons from Real-World Operations
We generally expect that every new technology, along with massive new advantages, will have fundamental flaws that mean the old technology always has its place in our tool belt. Such a narrative is comforting! It means none of the time we spent learning older tools was wasted, since the new hotness will never truly replace what came before. The term ‘observability’ has a lot of cachet when planning serverless applications. Some variation of this truism is ‘serverless is a great product that has real problems with observability.’
The reality is not one of equal offerings with individual strengths and weaknesses. Serverless is superior to previous managed hosting tools, and a large part of that superiority is the lack of hassle associated with logging, metrics, measurement, and analytics. Observability stands out as one of the few problems that serverless doesn’t solve on its own.
What Exactly is Observability?
Everything from logging to alerts gets labelled as observability, but the shortest definition is: observability lets you see externally how a system is working internally.
Observability should let you see what’s going wrong with your code without deploying new code. Does logging qualify as observability? Possibly! If a lambda logs each request it’s receiving, and the error is being caused by malformed URLs being passed to that lambda, logging would certainly resolve the issue! But when the question is ‘how are URLs getting malformed?’, it’s doubtful that logging will provide a clear answer.
In general, it would be difficult to say that aggregated metrics increase observability. If we know that all account updates sent after 9pm take over 200ms, it is hard to imagine how that will tell us what’s wrong with the code.
Preparing for the Past
A very common response to an outage or other emergency is to deploy a dashboard of metrics to detect the problem in future. This is an odd thing to do. Unless you can explain why you were unable to fix this problem, there’s no reason to add detection for this specific error. Further, dashboards often exist to detect the same symptoms, e.g. memory running out on a certain subset of servers. But running out of memory could be caused by many things, and unless we’re looking at exactly the same problem, ‘the server ran out of memory’ is a pretty worthless clue to start with.
Trends Over Incidents
Real crises are those that affect your users. And problems that have a real effect on users are neither single interactions nor are they aggregated information. Think about some statements and whether they constitute an acute crisis:
- Average load times are up 5% for all users. This kind of issue is a critical datum for project planning and management, but ‘make the site go faster for everyone’ is, or should be, a goal for all development whenever you’re not adding features.
- One transaction took 18 minutes. I bet you one million dollars this is either a maintenance task or delayed job.
- Thousands of Croatian accounts can’t log in. Now we actually have a trend! We might be seeing a usage pattern (possibly a brute-force attack), but there’s a chance that a routing or database layer is acting up in a way that affects one subset of users.
- All logins with a large number of notifications backed up are incredibly slow, more than 30 seconds. This gives us a nice tight section of code to examine. As long as our code base is functional, it shouldn’t be tough to root out a cause!
How Do We Fix This?
1. The right tools
The tool that could have been created to fix this exact problem is Rookout, which lets you add logging dynamically without re-deploying. While pre-baked logging is unlikely to help you fix a new problem, Rookout lets you add logging to any line of code without a re-deploy (a wrapper grabs new Rookout config at invocation). Right now I’m working on a tutorial where we hunt down Python bugs using Rookout, and it has proven a great tool for the job.
Two services offer event-based logging that moves away from a study of averages and metrics and toward trends.
- Honeycomb.io isn’t targeted at serverless directly, but offers great sampling tools. Sampling offers performance advantages over logging event details every time.
- IOpipe is targeted at serverless and is incredibly easy to get deployed on your lambdas. The information gathered favors transactions over invocations.
2. Tag, cross-reference, and group
Overall averages are dangerous: they lead us into broad-reaching diagnoses that don’t point to specific problems. Generalized optimization looks a lot like ‘pre-optimization,’ where you’re rewriting code without knowing what problems you’re trying to fix or how. The best way to ensure that you’re spotting trends is to add as many tags as are practical to what you’re measuring. You’ll also need a way to gather this back together and look for indicators of root causes. Good initial tag categories:
- Account Status
- Connection Type
- Request Pattern
Note that a lot of analytics tools will measure things like user agent, but you have to be careful to make sure that you don’t gather information that’s too specific. You need to be able to make statements like ‘all Android users are seeing errors’ and not get bogged down in specific build numbers.
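As a sketch of the idea (the tag names and values here are invented for illustration), events can be recorded with a small set of coarse tags and then grouped to surface trends rather than averages:

```python
from collections import defaultdict

# Each event is one transaction measurement with coarse tags attached.
# Tag values are deliberately broad ("backlogged", not an exact queue depth).
events = [
    {"ms": 120,   "account_status": "free", "notifications": "normal"},
    {"ms": 95,    "account_status": "paid", "notifications": "normal"},
    {"ms": 31000, "account_status": "free", "notifications": "backlogged"},
    {"ms": 30500, "account_status": "paid", "notifications": "backlogged"},
]

def slow_groups(events, threshold_ms=1000):
    """Group events by each (tag, value) pair and report groups where
    every event is slow -- a trend, not an average."""
    groups = defaultdict(list)
    for e in events:
        for tag, value in e.items():
            if tag == "ms":
                continue
            groups[(tag, value)].append(e["ms"])
    return [g for g, times in groups.items()
            if all(t > threshold_ms for t in times)]

# Averages would blur the four events together; grouping by tag instead
# isolates "all logins with backed-up notifications are slow".
print(slow_groups(events))  # [('notifications', 'backlogged')]
```

The point is that neither a single 31-second transaction nor an overall average latency would have pointed to the notifications backlog, but the tag grouping does.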
3. Real-world transactions are better than any other information
A lot of the cross-reference information mentioned above isn’t meaningful if data is only gathered from one layer. A list of the slowest functions or highest-latency DB requests indicates a possible problem, but only slow or error-prone user transactions indicate problems that a user somewhere will actually care about.
Indicators like testing, method instrumentation, or server health present very tiny fragments of a larger picture. It’s critical to do your best to measure total transaction time, with as many tags and groupings as possible.
4. Annotate your timeline
This final tip has become a standard part of the devops playbook, but it bears repeating: once you’re measuring changes in transaction health, be ready to look back at what has changed in your codebase, with accurate enough timing to correlate deployments with performance hits.
This approach can seem crude: weren’t we supposed to be targeting problems with APM-like tools that give us high detail? Sure, but fundamentally the fastest way to find newly introduced problems is to see them shortly after deployment.
Wrapping Up: Won’t This Cost a Lot?
As you expand your event measurement, you should find that traditional logging and metrics become less and less useful. Dashboards that were going weeks without being looked at will go months, and the initial ‘overhead’ of more event-and-transaction-focused measurement will pay off ten-fold in shorter and less frequent outages where no one knows what’s going on.
using EPiServer.Core;
using System.Collections.Generic;
using System.Web.Mvc;
namespace bmcdavid.Episerver.CmsToolbox.Rendering.Strategies
{
/// <summary>
/// Adds a first/last css class to content area items
/// </summary>
public class FirstLastItemRenderingStrategy : IContentAreaItemRenderingStrategy
{
/// <summary>
/// Default First/Last strategy
/// </summary>
public static readonly FirstLastItemRenderingStrategy Default = new FirstLastItemRenderingStrategy();
private readonly string _firstCssClass;
private readonly string _lastCssClass;
/// <summary>
/// Constructor
/// </summary>
/// <param name="firstCssClass"></param>
/// <param name="lastCssClass"></param>
public FirstLastItemRenderingStrategy(string firstCssClass = "first", string lastCssClass = "last")
{
_firstCssClass = $" {firstCssClass}";
_lastCssClass = $" {lastCssClass}";
}
/// <summary>
/// Renders items
/// </summary>
/// <param name="htmlHelper"></param>
/// <param name="contentAreaItems"></param>
/// <param name="contentItemRenderer"></param>
public void RenderContentAreaItems(HtmlHelper htmlHelper, IEnumerable<ContentAreaItem> contentAreaItems, IContentItemRenderer contentItemRenderer)
{
using (var iter = contentAreaItems.GetEnumerator())
{
var isFirst = true;
if (iter.MoveNext())
{
var renderItem = iter.Current;
do
{
var cssClass = string.Empty;
if (isFirst)
{
cssClass = _firstCssClass;
isFirst = false;
}
var currentItem = renderItem;
renderItem = iter.MoveNext() ? iter.Current : null;
if (renderItem == null) { cssClass += _lastCssClass; } // append so a single item receives both first and last classes
contentItemRenderer.RenderContentAreaItem
(
htmlHelper,
currentItem,
contentItemRenderer.GetContentAreaItemTemplateTag(htmlHelper, currentItem),
contentItemRenderer.GetContentAreaItemHtmlTag(htmlHelper, currentItem),
contentItemRenderer.GetContentAreaItemCssClass(htmlHelper, currentItem) + cssClass
);
} while (renderItem != null);
}
}
}
}
}
Now that I have an easy-to-use tool for editing my blogroll, I’ve come to a somewhat startling realization. It’s really difficult to decide what order the links should appear in! (I don’t envy you, Dave!)
Now I’m not exactly a newbie at this website thing, which is part of why I was so surprised. Something changed in a big way since the last time I designed a set of navigation links that weren’t solely utilitarian: There are a whole lot more things to link to. The last time I put any serious effort into a (somewhat) personal set of links was in late 1999, when I did the template for Jake’s Brainpan. It’s been a while.
I started sorting things out about an hour ago. The important links floated to the top pretty quickly, but it steadily became more and more difficult to sort them. After all, what criteria should I use? Here are some thoughts I had about it:
1) How often do I visit the link? This is a very good gauge of how important the link is to me, but not very good for determining how important it might be to someone else.
2) How important is the site’s author to me personally? Again, a very good indication for me, but not necessarily so useful to others. Also, the people who are really important to me might not update that often, so having the link near the top of the list might not even be that useful to me.
3) Am I an author or contributor? Obviously sites I maintain or contribute to should get some extra weight, but again, lots of sites I maintain don’t get updated all that frequently (if at all), so how does that weigh in?
4) How often does the site get updated? This could easily be #1, but then I’d probably automate the order of the links (duh!).
5) How closely does the link relate to my work and my message? This is really difficult to evaluate, and may change from day to day more often than the previous four criteria.
So, I took a stab at it.
Number one has to be mom. No two ways about it, she’s the most important person in my life besides myself, and even though her site ranks somewhat lower on the other four criteria, she gets the top link. Plus I like the way she writes. (Hi Mom!)
Number two is totally gratuitous. I’m the most important person in my life, so it only makes sense that I’d be near the top of my own links. Of course, I could have made this the first link, but something told me that the top link should be to someone else.
The next three links are to UserLand people. They had to be there considering how important UserLand is to me, and even though I visit Dave’s site more than John’s or Lawrence’s, it made sense to group them together.
And this is where it gets fuzzy. The next 10 or so links are either ones I visit very often, or ones that I think are important for me, for my readers, for people who might be interested in the same things I’m interested in, or for other people that I link to.
After that it’s a free-for-all. Most are people. Some are sites I have a hand in. Some are work-related, and some are not.
If you think you appear too far down on the list (Aaron), please accept my sincere apologies regarding the nature of the printed word. It’s not my fault that lists have an order, and that you can’t be in two places at once… (or can you?)
The conclusion I reached is that blogrolls absolutely beg for hierarchy. Imagine the following scenario:
Links organized into categories, with each link ranked (by order) within the category. Any link can appear in more than one category, with a different ranking depending upon perceived relevance. Categories can be nested within one another.
Mind-bomb #1: A category can be the blogroll from another of your sites.
Mind-bomb #2: A category can be the blogroll from anyone’s site.
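This kind of nesting maps naturally onto an outline format like OPML, the format already behind many blogrolls. The sketch below is one plausible arrangement; the URLs are placeholders, and the use of an `include` outline type for pulling in someone else's blogroll is an assumption about convention rather than a guarantee every reader supports it:

```xml
<opml version="1.1">
  <body>
    <outline text="People">
      <outline text="Mom" type="link" url="http://example.com/mom/"/>
      <outline text="Me" type="link" url="http://example.com/me/"/>
    </outline>
    <outline text="UserLand">
      <outline text="Dave" type="link" url="http://example.com/dave/"/>
      <!-- Mind-bomb: a category can itself be another site's blogroll -->
      <outline type="include" url="http://example.com/friend/blogroll.opml"/>
    </outline>
  </body>
</opml>
```

The same link can simply appear under two different parent outlines, which solves the "two places at once" problem for free.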
A & D — Walk Left or Right
M — Scythe Attack (Gun if in the air)
M (Big Shot Meter Full) — Big Shot Attack
N (in air) — Double Jump
N — Neutral Attack (Double Jump if in the air)
N + Walk — Side Attack
I — Change Card Deck
Archa (Half-Alpha Demo)
As Archa, fight enemies, use your cards, and collect items and coins to raise your score to complete the first level in the Half-Alpha demo for this upcoming action-packed platformer!
Controls — Title Screen: Left/Right Arrow Keys: Next Option; Enter: Select. Gameplay: A/D: Walk; Space: Jump; M: Scythe Attack (Gun when in the air, Big Shot when the meter is full); Space/A/D + N: Special Attacks
- N: Butterfly
- A/D + N: Purple Flames
- Air N: Double-Jump
Story — This will be included in the full game.
Full Game Release Date — Probably around the middle of the 2020s.
What's new? — 0.A.051 Changelog:
- Fixed a bug where Camy's dialogue would appear twice if the player is too close to them.
- Archa's design and animations have now been completely revamped!
- The First level is now completed!
- Added golden items
- Added Camy
- Added a dialogue system
- Added a card system
- Changed the Title Screen and its Music
- Removed the Minigun Item. It's now usable as a card.
- Added Music composed by OPagel (me)
- Changed the score system
- Added Thigs and Clops enemies
- Reworked Snow City's tileset
- Added Trees to the background
- Added a level end goal
- Added an automatic game-balancing system
- Removed the small flame projectile from the Double Jump
- The bullet's speed has been increased
- The game has been rebranded to Archa.
- Balancing Changes.
- The UI is now at the bottom of the screen.
- Revamped Title Screen.
- 3 new logo intros.
- Archa now has a minigun attack and a cyan-and-pink bow.
- Background elements have been added and modified.
- juicy ketchup.
- The Big Shot now has an effect when used.
- The Big Shot meter can now be filled up by attacking enemies.
- The hampter's location has been changed.
- The Double-Jump now shoots a small flame that can attack enemies.
- Removed the non-functioning challenge and its code.
When pulling data from the blockchain, most of the information returned is stored in binary format so as to save space and allow for quicker transmission of data. This makes it difficult to sift through, as it first needs to be decoded into a readable format (ex: JSON). dfuse has recently expanded its GraphQL endpoints to include ABI-decoded table rows inside dfuse Search results.
Before today’s announcement, multiple REST calls were required to achieve the results below, leading to synchronization and latency issues. Now GraphQL turns it into a single query, with a tailored payload, through a real-time stream (in a GraphQL Subscription).
dfuse Tracks Deep Database Operations
Let’s take the example of an EOS token transfer. Say Alice has 10 EOS and Bob has 5. If Alice transfers 1 EOS to Bob, then the
eosio.token contract will need to update both rows: one for Alice and one for Bob’s accounts.
View of database operations on eosq.app
These are known as database operations. dfuse is the only provider of these state deltas down to the action level. This level of granularity allows you to track down any issue in your contracts, as you can see the side effects of each action's execution, even if your transaction has 25 actions.
nodeos provides a view of the last state (content of tables for the different contracts), and it is always moving. You can think of this in terms of other history providers giving you just a recap of the information, whereas dfuse provides each detail.
This new feature ensures you get an ABI-decoded JSON view of the table rows, and takes into account any changes to the on-chain ABI. Querying historical transactions always uses the relevant historical ABI.
Some contracts provide an invalid ABI, one inconsistent with their transactions, or no ABI at all. In these cases, you will find the error field useful, and the object field will return
Announcing a Unified View of Transactions And State Changes Through the dfuse GraphQL Endpoints
Let's consider the query to view vote pay (eosio.vpay) actions for the account eoscanadacom:
"receiver:eosio.token action:transfer data.from:eosio.vpay data.to:eoscanadacom"
In the above view (and with all other history solutions), you only receive the quantity being transferred (274.9777 EOS). However, if we add in the dbOps field, you will now receive the database operations for both eosio.vpay and eoscanadacom, which are nowhere to be found with other history solutions.
With dfuse, you can now have a precise view of your balance (or of any table) as committed after each action. This is a non-negligible advantage for accounting purposes!
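To make the accounting idea concrete, here is a hedged Python sketch of replaying per-action deltas to reconstruct a balance after every action. The record shape below is hypothetical and is not dfuse's actual dbOps payload; it only illustrates why action-level deltas matter.

```python
# Hypothetical record shape, for illustration only (not dfuse's payload):
# each dbOp is reduced to the signed change it made to one balance row.
def replay_balances(start, db_ops):
    """Yield the running balance after each delta is applied."""
    balance = start
    for op in db_ops:
        balance += op["delta"]
        yield balance

# Three actions touching the same balance row:
ops = [{"delta": -1.0}, {"delta": 0.5}, {"delta": 274.9777}]
print(list(replay_balances(10.0, ops)))  # running balance after each action
```

A plain "last state" view would only show the final number; replaying the deltas gives the balance after every intermediate action.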
So if you’re after the full story and not just a recap, dfuse is the only platform that should be feeding your dapp with information.
Join the discussion in the dfuse API Telegram channel. Get started for free and learn why many dapp developers are now feeding their backend with information from dfuse.
|
OPCFW_CODE
|
Urho3D is a great game engine, so how about a game that is about Urho3D itself? To that end, I made an auction-style online board game, "UrhoCraft", on the World_Gate platform. (For World_Gate, please see the other post.)
The goal of this game is to remember the community's history. You can see the major contributors and some projects made with Urho3D in this game. The images are collected from the forum.
Here is a demo video.
4 players will use resource cards to try to finish projects.
Each project requires certain skill levels. The requirement can be hard to meet, so players need to cooperate and finish the project together.
Each project offers certain points when finished. Players need to negotiate and compete with each other to get the best possible points.
When game ends whoever has the highest points wins the game.
Game rules in a nutshell: if you know the board game I’m the Boss, you are good to go.
Game rules explained in details:
4 players start with 5 cards in hand. The game continues for 24 turns.
In each turn, a new project will be shown on field as well as the required skills to finish it and the points it offers. Now players can freely use cards and claim how many points they want.
There are two types of cards. One is the contributor: this type has skills on it and can be placed onto the field. The other is the tactical card, which has special effects to make the game more interesting.
One player will become the host for the turn. The host can decide which players participate in the project by clicking their avatars. When the project requirements are met (that is, there are enough contributors to meet the skill levels and the points are distributed properly), the host can conclude the deal. All participants get the points they asked for and use up the contributors on the field. The host gets an additional card as a bonus. The host can also choose to pass on the project and draw 2 cards instead. Whichever the choice, the turn then ends.
There's an additional element in this game. Each project has a type: blue, green or yellow. Each player gets a secret goal at the start of the game, something like: if you finish certain types of projects, you will gain additional points at the end.
When 24 turns have passed, the game ends. Whoever has the highest score wins the game.
How to play:
Go to this page and download World_Gate client.
(The client is made with Urho3D. It is free to use, collects no personal info, no background activity)
After starting the client, click search button and search by name. You can download this world now.
When the world is ready, click the flag button in the welcome window and you can spot Urhocraft.
Quick join and enjoy!
Right now all servers are located in NYC. Please have some patience if you live outside US.
Waiting for 4 players may not be easy, though.
|
OPCFW_CODE
|
I. Spent. AT LEAST 15 HOURS ON THIS GODDAMN CODE. Endless stretches spent on a little blue couch, waking up early, leaving my physical form to be cOnSuMeD by the lab. ENDLESS. Helper functions. Helper functions for the helper functions. Functions to make test cases. Helper functions for the functions to make test cases to help me figure out what the FLIPPIN FRACK was wrong with my code. I wrote in so many print statements; I basically made my code recite back to me EXACTLY what it was doing in scripts a mile long. But still, for 15 hours, I was stuck in the exact. Same. Spot. It made no sense, none at all. Virtual office hours? I’ve literally never heard of anything worse. Just the idea of a virtual office hour FILLS me with dread. But I went anyways, twice, and I got advice, and it helped! jUsT kIdDiNG it didn’t lololololololololol XD. And then I hit this point, this glorious point, where I just didn’t care enough to keep going. Ah, bless the liberation of burnout-born apathy! So I left it half done as the late deductions incurred. I went on with my goddamn life. I made curry. I moved my roommate from MIT into the Maine house (!!!). I wrote writing thingies. 11 out of 17 cases was okay because, y’know, the world is like basically ending and I can just become *one with the nature* here in Maine and then MIT classes won’t matter. All was well, right?
But then. THEN. I had a checkoff two days ago. If I wanted even the measly 11 test cases I actually did pass, I would have to subject myself to a virtual checkoff. UGH. I HATE THIS VIRTUAL THING. So I go to clean up the comments and the obscene amount of print statements in my code, and JUST so the LA can see the cleaner version, I re-submit. But y’know what happens? DO. YOU. KNOW. WHAT. ACTUALLY LITERALLY FOR REALSIES. HAPPENS.
I pass three more test cases. And they won’t improve my grade at all XDDDDDDD.
This means that MY CODE. The code I had a WEEK before the checkoff would’ve improved my grade soooooo much if I had just s.u.b.m.i.t.t.e.d.i.t.
Do u know what it’s like. To have MIT pset dread. Mix with pandemic-fueled Nihilism?
It makes you write this blogpost.
I want a better grade in 6.009. I want to be back at MIT and swing on the rope swing in the East Campus courtyard. I want to grill veggie burgers and laugh and hug my friends without worrying that I might actually kill them with a virus I may or may not be carrying. I want 15 HOURS of work to amount to something —ANYTHING— because that would make sense. I want to not sleep and give all of you the best CPW there’s ever been because THAT would be fair.
There’s no lesson. Just screams. Thanks for reading.
|
OPCFW_CODE
|
Debugging sensors on a microprocessor can be a hassle and the most used approach is to output the sensor values to a serial monitor. Realtime plotting is a better and more visual way of doing the same thing.
- Updates the plot in real time while the data is still being processed by the microprocessor
- Plots live data from serial port. Microprocessor choice does not matter as long as it can send serial data to your computer.
- 6 channels of data (and this can be increased if necessary)
- Realtime bar charts
- Realtime line graphs
- You just send the data you want to debug with a space as delimiter, like this: "value1 value2 value3 value4 value5 value6". Floats or integers, it does not matter.
- Open source
- Robust. It will not crash because of corrupt data stream or similar.
- Multi-platform Java. Tested on OS X and Windows 8 (and should work on Linux as well).
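The "robust against corrupt data" behavior above can be sketched in a few lines. This Python parser (an illustration of the idea, not part of the Processing tool itself) accepts a space-delimited line of six numbers and rejects anything malformed instead of crashing:

```python
def parse_line(line, channels=6):
    """Parse 'v1 v2 ... v6' into floats; return None if the line is corrupt."""
    parts = line.split()
    if len(parts) != channels:
        return None  # wrong channel count: drop the frame
    try:
        return [float(p) for p in parts]
    except ValueError:
        return None  # non-numeric garbage in the stream: drop the frame

print(parse_line("1 2.5 -3 4 5 6"))     # [1.0, 2.5, -3.0, 4.0, 5.0, 6.0]
print(parse_line("1 2 garbage 4 5 6"))  # None
```

Dropping a bad frame and waiting for the next one is what keeps a live plotter from crashing on a glitchy serial link.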
I created this software to debug an Arduino Due on my self-balancing robot. To tune the controls of the robot I needed fast feedback to know if I was making progress or not. The video below demonstrates typical use of the realtime plotter:
Installation and usage
Since I have an Arduino I will use it as example but any micro processor can be used.
- Download the Processing IDE to run the code. It is a neat and useful IDE for doing graphical stuff.
- Download the ControlP5 GUI library and unpack it into the Processing libraries folder
- Connect the Arduino to the usb or serial port of your computer.
- Upload the example code (RealtimePlotterArduinoCode) to the Arduino
- Open the serial monitor (at 115200 baud) and check that it outputs data in the format "value1 value2 value3 value4 value5 value6".
- Close the serial monitor (since only one resource can use the serial port at the same time).
- Open the Processing sketch and edit the serial port name to correspond to the actual port ("COM3", "COM5", "/dev/tty.usbmodem1411" or whatever you have)
- Run the code
The realtime plotter can be expanded to also send commands to the microprocessor. The usual approach when programming microprocessors is to set some parameters at the beginning of the code, upload them to the processor, see the result, change the parameters again, upload, and so on until satisfactory performance is achieved. This iterative process takes a lot of time, and a better approach is to send updated parameters to the microprocessor from your computer via serial data. For example, I needed to tune some parameters on my robot and created a command panel that runs in parallel with the realtime plotter. For each change in parameters, I can immediately see the result on the plotting screen. Example code for this is located in /RealtimePlotterWithControlPanel.
I decided to send and receive the data as ASCII characters instead of binary. Performance is the main disadvantage; ease of use is the main advantage.
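The ASCII-versus-binary tradeoff is easy to quantify. This Python comparison is my own addition (the sample values are arbitrary): six floats cost a fixed 24 bytes in packed binary, but typically more as space-delimited text.

```python
import struct

# Six sample sensor values, as the plotter expects per frame.
values = [12.345, -0.5, 100.0, 3.14159, 42.0, -7.25]

ascii_frame = (" ".join(str(v) for v in values) + "\n").encode("ascii")
binary_frame = struct.pack("<6f", *values)  # six little-endian 32-bit floats

print(len(ascii_frame), len(binary_frame))  # the text frame is larger
```

The text frame is also variable-length and human-readable, which is exactly what makes the ASCII protocol easier to debug in a serial monitor.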
In some sense the realtime data plotter can also be used as a very slow and limited digital oscilloscope. I would not recommend using it for any high frequency applications though.
Some comments about earlier approaches and the used libraries
I have tried many different ways of doing this. My first approach was Matlab, but I had problems with it locking the serial port; it was a hassle to get working, and getting everything configured takes too much time. My second approach was Python with graphing libraries, but this was still not very satisfactory. The Processing language, together with a graph library and ControlP5, made the whole thing much easier.
|
OPCFW_CODE
|
It's time for some new crypto words! Here we are with Part 4 of the series. Maybe you have already met some of these words in different contexts and found them strange; if this is the first time you are seeing them, even better, as you now have the chance to understand them.
1. NFT = Non-Fungible Token - it's a cryptocurrency token that represents a unique asset, either real-world (eg. a piece of land) or digital (eg. a weapon in a game); in general NFTs are unique, indivisible and scarce; eg. on the Ethereum blockchain you can create NFTs using the ERC-721 and ERC-1155 standards; you can take a look at CryptoKitties;
2. ERC-721 = it's an Ethereum standard for NFT's; basically it's a subset of Ethereum tokens;
3. ERC-1155 = it's an Ethereum standard for contracts that manage multiple token types; you can create both fungible (eg. currencies) and non-fungible assets on Ethereum Network; the cost of transferring token is reduced because using this standard transactions could be bundled together;
4. ERC-20 = standard protocol for Ethereum issuing tokens; it comes from Ethereum Request for Comments; it governs the tokens on the Ethereum blockchain;
5. ETHER = it's the fuel of Ethereum; the transactional token on the Ethereum smart contracts;
6. WEI = smallest denomination of Ether; 1 Ether = 1,000,000,000,000,000,000 Wei (10^18); note that Wei is not the same as gas: gas measures the computational work of a transaction, and the fee you pay is gas used multiplied by the gas price (quoted in Wei or Gwei);
7. GWEI = one billion (10^9) Wei; gas prices for transactions on the Ethereum network are usually quoted in Gwei;
8. ROADMAP = it's a plan where we can see the future and the past achievements of a project; also we can know what are the next features to be implemented;
9. ROI = Return On Investment - it's a performance measure that evaluates the efficiency of an investment; it has also a formula: ROI = (value of investment - cost of investment) / cost of investment;
10. SOLIDITY = programming language to implement Smart Contracts (eg. used in Ethereum Smart Contracts, but not only);
11. XBT = another abbreviation for Bitcoin (BTC); it follows the ISO 4217 convention (International Organization for Standardization) for non-national currencies, whose codes start with X;
12. YIELD FARMING = a process that aims to generate the most return on your crypto investment; something like "putting your money to work for you"; DeFi plays a big role here, as it is what makes yield farming possible;
13. FAUCET = it's an application or an website that helps to increase awareness of a project by offering free coins for some cryptocurrencies;
14. OPEN SOURCE = the code is public and anyone can access it; programmers can understand what's happening "behind the scenes" and they also can find bugs in the code or vulnerabilities;
15. PONZI SCHEME = an illegal scam in which investors are promised big profits; usually early investors are paid with money from later investors;
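The Ether denominations above translate directly into arithmetic. A small Python sketch (the 21000-gas transfer and 50-gwei price are illustrative examples, and the fee formula is the simplified gas-used-times-gas-price form):

```python
# Unit arithmetic for Wei / Gwei / Ether.
WEI_PER_ETHER = 10**18
WEI_PER_GWEI = 10**9

def gwei_to_ether(gwei):
    return gwei * WEI_PER_GWEI / WEI_PER_ETHER

# A plain transfer uses 21000 gas; at a gas price of 50 gwei:
fee_in_gwei = 21_000 * 50
print(gwei_to_ether(fee_in_gwei))  # about 0.00105 ETH
```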
Hope you have learned something new today. Please take into consideration to read also Part 1, Part 2 and Part 3 (links down below in the Resources section) where you can find other interesting crypto words.
Let me know in the comments what words were new to you, if any.
Thanks for reading and have a nice day,
|
OPCFW_CODE
|
Statics and equilibrium of a rigid body
A vertical cylindrical container contains within it three identical spheres of equal weight P which are tangent to each other and also to the inner wall of the container. A fourth sphere, identical to the previous ones, is then superimposed on the three spheres as illustrated in dotted. Determine the respective intensities of the normal forces as a function of P which the vessel wall exerts on the three spheres.
Very interesting question, but I saw a resolution and could not understand why there are no contact forces between the base spheres. thanks in advance
Without the top sphere, the bottom spheres are in casual contact with each other and the walls of the cylinder (no contact forces). However, each would exert a force of P on the bottom of the cylinder (if it has one). With the top sphere in place, it would tend to push the bottom spheres apart, applying forces to the side walls, but still not to each other.
thanks! so if there were the three spheres at the base, would there be no contact forces between them and not at the sides? why?
I have decided to respond in the form of an answer rather than comment. Hope it helps.
Very interesting question, but I saw a resolution and could not
understand why there are no contact forces between the base spheres.
thanks in advance
Here are my thoughts, though I admit they may be debatable.
I think we need to address the question in two parts. One with the upper sphere not in place and one with the upper sphere in place.
No sphere on top:
In my mind, this one is a bit tricky. We can say that the separation of the spheres from one another is "zero" if any attempt to bring them a tiny bit closer results in a "measurable" (non zero) force that tends to push them apart, and that increases rapidly the closer you attempt to bring them together. Conversely, any tiny increase in separation from this "zero" position would yield no measurable force between them. So approaching the zero separation from either direction we may consider the forces between the spheres approach zero in the limit.
We may arbitrarily define this zero separation as "touching" where the contact force is, in the limit, zero. One thought experiment would be to ask what would happen if the cylinder was removed? If there was no friction involved at any of the contacts, would the spheres separate, indicative of repulsive contact forces between them?
Sphere on top:
This is probably more straightforward. Starting with the assumption that the lower spheres are just "touching" (as defined above), clearly there will be a component of the weight of the upper sphere acting on each of the lower spheres that will tend to push the spheres apart and press them against the cylinder walls. Thus we may say that the forces between the bottom spheres should definitely be zero due to the influence of the top sphere. Once again this assumes frictionless contacts throughout.
Hope this helps.
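As a numeric sanity check of the "sphere on top" geometry (my own addition, not from the answer above), assume unit radius, frictionless contacts, and a cylinder that just fits the three bottom spheres. Vertical balance of the top sphere fixes the force along each line of centers, and its horizontal component is what the wall must react; it comes out to P/(3√2) ≈ 0.236 P per sphere.

```python
import math

r, P = 1.0, 1.0  # unit radius and weight (illustrative values)

d = 2 * r / math.sqrt(3)              # axis-to-center distance (circumradius of the triangle of centers)
h = math.sqrt((2 * r) ** 2 - d ** 2)  # vertical offset of the top sphere's center
cos_t, sin_t = h / (2 * r), d / (2 * r)

F = P / (3 * cos_t)   # force along each top-to-bottom line of centers (vertical balance)
N_wall = F * sin_t    # its horizontal component, reacted by the cylinder wall

print(N_wall)  # ~ 0.2357, i.e. P/(3*sqrt(2))
```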
Thanks!! Excellent
Have you the collection of the downvotes? :-)
@Sebastiano Don’t understand what you mean. What collection of downvotes?
I have seen your answers and there are many with -1 score.
@Sebastiano What do you mean by “many” and why do you ask?
Because I have not understood in this site there are lot of downvotes with good answers. I have finished to write :-)
@Sebastiano The problem is most downvotes do not give you the reason(s), so there is no way to know whether or not they are justified. When a good justification is give, the downvote can be helpful (hey, we all make mistakes). But when there is no justification, they are not helpful. When I started on this site 2-1/2 years ago they used to upset me, but now I just accept them as a fact of life. Ciao.
My best regards....:-) and thank you very much for your contribute.
|
STACK_EXCHANGE
|
Why is pylint not satisfied with my working code?
I am using VS Code for writing python but I am having issues regarding pylint.
I have a basic file structure
.env
src/
    __init__.py
    module1/
        __init__.py
        file1.py
        file2.py
    module2/
        __init__.py
        file.py
    main.py
If I import some_method in main.py like so: from module1.file1 import some_method the code runs as it was intended but pylint is not satisfied and says Unable to import module1.file1.
If I import it like so: from src.module1.file1 import some_method pylint is then satisfied but it breaks my code (this isn't how it's supposed to be imported based on my file structure), returning an error saying "No module named 'src' " which is what I expect.
I tried searching for solutions specific to pylint in VS Code but none have worked. I keep getting answers or 'solutions' saying it has to do with the path pylint is executed from.
I am running a virtualenv in the same folder level as my 'src' folder is with pylint installed in that virtualenv with python3.6. Is this a path issue in the settings or am I overlooking something obvious?
What is the working directory for pylint? Lacking any __init__.py files, you technically don't have any explicit packages. However, I think that the imports from main.py, because it lives in src, treat any directories in the same directory as packages. pylint appears to use whatever directory contains src as an "implicit" package.
I have __init__.py files in each directory (I failed to include that in my question.)
Is there one in src/? That (combined with the difference in what python and pylint may be treating as the implicit package) could explain it.
Yes there is. The problem with the former way of importing as in my question is not that the code won't run, it only makes pylint spit out an error but with the later, my code breaks but satisfies pylint. Don't know if this helps but my virtualenv folder and /src folder are on the same level in my directory tree
Including an __init__.py file in src is likely incorrect. More likely the actual issue is that you're trying to run pylint src, when what you want is to run pylint from the src directory.
One solution that you could use would be to reference the module relatively using . before the module:
from .module1.file1 import some_method
Sorry, my pc forced restart and I wasn't finished editing my question. I cleared up my file structure a bit with the the import method I'm using. Hope this clears it a bit
|
STACK_EXCHANGE
|
Novel: Imperial Commander: His Pretty Wife Is Spoiled Rotten
Chapter 990 – Qiao Ximin Schemes
College classes were starting soon. Qiao Ximin knew that if she could make friends with Yun Xi, she could use that connection to get a chance to meet the Young Commander one day.
Yun Xi stared at the senior who was asking the question and smiled pleasantly. "The Medical School, may I ask which direction it is in?"
Every corner of the campus was full of freshman orientation banners. As Yun Xi walked along the paths, she could see crowds gathered around each faculty, and the whole scene was humming with excitement.
Qiao Ximin sat in her car and screamed. This was her only way of relieving the fury and turmoil bottled up inside her.
Qiao Ximin felt as if she was on the verge of a mental breakdown once she heard that Qiao Lixin had met with those around the Young Commander. Though he had not met the Young Commander yet, he had successfully negotiated the venture regardless.
In an instant, Yun Xi was surrounded by curious seniors.
It was hard to study in the Medical School, and there weren't many students willing to enroll in the major. Moreover, she was the only female to join this year. The gender imbalance was severe.
She recalled the meeting he had had with Yun Xi yesterday. Almost no time had passed since then, and he had already gotten his hands on this opportunity. That was no coincidence.
Everyone was extremely curious about her. Ever since they had found out she had chosen to attend the Medical School, they had kept their eyes locked on it, hoping to catch a glimpse of the scholar.
Chapter 990: Qiao Ximin Schemes
"You must be the Yun Xi who was the top scholar in three subjects." After she had confirmed her name, another senior called out as if to relay the message.
She had always had feelings for him, and her greatest goal had long been to marry this man of her dreams.
She couldn't believe that Qiao Lixin had been the one to have this opportunity fall into his hands.
"Are you Yun Xi from Jingdu Secondary School?"
As soon as she regained her composure, she forced herself to take another look at Yun Xi.
In her sage green classic gauze dress, she held herself with the gentleness of an accomplished lady of the water towns of the south. Anyone who looked at her felt as though they had been transported into the well-known poem "A Lane in the Rain" by Dai Wangshu.
"Hey, do y'all think she looks like the top scholar in three subjects who was reported in the newspapers?"
Someone mustered their courage and approached her from the crowd. "Hey there, which college are you from?"
Yun Xi nodded and looked toward him in confusion. "I am. Is something the matter?"
The entire Medical School had fewer people than one class in the College of Business.
She was looking for the building when she noticed that seniors from the nearby faculties seemed to be looking at her with great interest.
"She does dress like her…"
Holding an oil-paper umbrella, her graceful air was as stunning as a work of art, a beauty that was timeless and unique.
Knowing Qiao Lixin's abilities and networking skills, she knew he would not have been able to contact the Young Commander on his own, nor would he ever have had a chance of landing such a project. Yun Xi must have provided him with this opportunity.
|
OPCFW_CODE
|
The Evolution of Human Technology
For a long time, human technology was limited to our brains, fire, and pointed sticks. However, as time went on, these crude tools evolved into nuclear power plants and atomic bombs. Our greatest creations have always come from our brains. Since the 1960s, the power of our computers has exponentially increased, allowing them to become smaller and more powerful. However, we are now reaching the physical limits of this technological progress. The components of computers are approaching the size of an atom, which poses a problem. To understand why this is a problem, let’s delve into some key points.
The Basics of Computer Components
An ordinary computer is made up of very basic components that perform simple tasks: components that represent data, means of manipulating it, and control mechanisms. Electronic chips are made up of modules that consist of logic gates, which in turn are made up of transistors. A transistor is the simplest form of data processing in a computer. It acts as a switch that can open or close access to the data passing through it. This data is made up of bits, which can be either 0 or 1. Groups of bits are used to represent more complex information. Transistors are combined to form logic gates that perform simple actions. For example, an AND gate sends a 1 if all of its inputs are 1, otherwise it sends a 0. By arranging logic gates, modules can be created that can add numbers. Once you can add, you can also multiply. And when you can multiply, you can do anything. When all basic operations are simpler than first-grade math, you can think of a computer as a group of 7-year-old children answering simple math questions. A sufficiently large number of them can solve anything from astrophysics to Zelda.
The Challenge of Shrinking Components
As computer components become smaller and smaller, quantum physics comes into play. In simple terms, a transistor is just an electrical switch. Electricity is the movement of electrons from one point to another, so a switch is a pathway that can block electrons from moving in one direction. Today, the average size of a transistor is 14 nanometers, which is 8 times smaller than the diameter of the HIV virus and 500 times smaller than a red blood cell. As transistors shrink to the size of a few atoms, electrons slip to the other side of the barrier by quantum tunneling. In the quantum realm, physics works differently, and traditional computers get a bit lost (well, a lot lost). We are facing a true physical barrier to technological progress.
The Rise of Quantum Computers
To overcome this problem, scientists are attempting to harness these unusual quantum properties by building quantum computers. In conventional computers, bits are the smallest unit. Quantum computers use qubits, which can be 0 and/or 1. A qubit can be any two-level quantum system, such as a spin in a magnetic field or a photon. For a photon, 0 and 1 are the possible values, represented by vertical or horizontal polarization. In the quantum world, a qubit need not be in just one state, but can be in any proportion of the two states at the same time. This is called superposition. However, as soon as you test its value by sending the photon through a filter, the photon must be polarized either horizontally or vertically. So, as long as it is unobserved, it is in a superposition of probabilities between 0 and 1, and we cannot predict the result. The moment you measure it, the state of the qubit is defined. Superposition changes everything.
The Power of Quantum Superposition
Classical 4-bit computers have 2 to the power of 4 different configurations, which makes 16 possible configurations, of which only one can exist at a time. With qubits, thanks to superposition, all combinations can exist at the same time! This number increases exponentially with each additional qubit: just 20 qubits can already hold over 1 million possibilities in parallel. One of the strange properties qubits can have is quantum entanglement: each entangled qubit reacts instantly to a change in its partner's state, regardless of distance. This means that measuring one entangled qubit allows you to deduce the state of its partner without looking at it.
Manipulating Quantum Qubits
Manipulating qubits is also somewhat perplexing. While a normal logic gate takes a defined set of inputs and produces one definite output, a quantum gate takes a superposition as input, rotates the probabilities, and produces another superposition as output. A quantum computer therefore applies quantum gates to entangle qubits and manipulate probabilities, then measures the output, collapsing the superposition into a sequence of 0s and 1s. This means you can operate on a whole bunch of possible configurations at once. In reality, you can only measure one result at a time, so you will likely have to retry until you get the right one. But by cleverly harnessing superposition and entanglement, quantum computing can be exponentially more efficient than any traditional computer.
The Applications of Quantum Computing
While quantum computers will not replace our current computers, they are superior in certain cases. One such case is database searching. To find something, a normal computer may have to check every one of its entries, but a quantum algorithm needs only about the square root of that number of steps, which makes a huge difference for large databases. The most well-known use of quantum computers is in the field of computer security. Currently, online banking information is secured by an encryption system in which you provide a public key for encoding messages that only you can decode. The problem is that this public key could, in principle, be used to calculate your private key. Fortunately, performing the necessary calculations would take a conventional computer years of trial and error, but a quantum computer with its exponential speed-up could do it quickly. Another interesting application is simulation. Simulating quantum systems is very resource-intensive, and even for modest structures such as molecules, we often have very little precision. So why not simulate quantum physics with quantum physics itself? Quantum simulation could shed light on the functioning of proteins and assist in medical research.
The Future of Quantum Computing
Currently, we do not know if quantum computers will be highly specialized tools or a revolution for humanity. We do not know where the limits of this technology lie, and there is only one way to find out: exploration and experimentation.
|
OPCFW_CODE
|
#include <KrisLibrary/Logger.h>
#include "MonotoneChain.h"
#include <iostream>
#include <sstream>
using namespace Geometry;
using namespace std;
Real eval_y(const Vector2& a,const Vector2& b,Real x)
{
  assert(x >= a.x && x <= b.x);
  assert(a.x < b.x);
  Real u=(x-a.x)/(b.x - a.x);
  return (1-u)*a.y + u*b.y;
}

Real eval_y(const Segment2D& s,Real x)
{
  return eval_y(s.a,s.b,x);
}
Real XMonotoneChain::eval(Real x) const
{
  assert(!v.empty());
  assert(x >= v.front().x);
  assert(x <= v.back().x);
  if(v.size()==1) return v.front().y;
  assert(isValid());
  //should we do log n search? eh whatever
  for(size_t i=0;i+1<v.size();i++) {
    if(v[i].x <= x && x <= v[i+1].x) {
      return eval_y(v[i],v[i+1],x);
    }
  }
  LOG4CXX_FATAL(KrisLibrary::logger(),"Shouldn't get here");
  stringstream ss;
  for(size_t i=0;i<v.size();i++)
    ss<<v[i]<<", ";
  LOG4CXX_FATAL(KrisLibrary::logger(),ss.str());
  LOG4CXX_FATAL(KrisLibrary::logger(),"x is "<<x);
  abort();
  return 0;
}
bool XMonotoneChain::isValid() const
{
  if(v.empty()) return true;  //an empty chain is trivially valid (guards v.back() below)
  for(size_t i=0;i+1<v.size();i++) {
    if(IsNaN(v[i].x) || IsNaN(v[i].y)) { LOG4CXX_INFO(KrisLibrary::logger(),"NaN!"); return false; }
    if(!Lexical2DOrder(v[i],v[i+1])) {
      LOG4CXX_INFO(KrisLibrary::logger(),"Not in lexical order!");
      LOG4CXX_INFO(KrisLibrary::logger(),v[i]<<" -> "<<v[i+1]);
      return false;
    }
  }
  if(IsNaN(v.back().x) || IsNaN(v.back().y)) { LOG4CXX_INFO(KrisLibrary::logger(),"NaN!"); return false; }
  return true;
}
void XMonotoneChain::upperEnvelope(const XMonotoneChain& e)
{
  const vector<Vector2>& w=e.v;
  if(v.empty()) {
    v=w;
    return;
  }
  if(w.empty()) { //no change to upper envelope
    return;
  }
  assert(isValid() && e.isValid());
  assert(v.size()>=2);
  assert(w.size()>=2);
  //make sure x-ranges intersect
  assert(v.front().x < w.back().x);
  assert(w.front().x < v.back().x);
  //sweep a vertical line from left to right over x
  //status is s1,s2, the segments that we're currently on
  //event points only at segment endpoints
  Vector2 p;
  Segment2D s1,s2;
  Real y1,y2;
  //initialize the sweep with the first segment of each chain
  s1.a=v[0];
  s2.a=w[0];
  s1.b=v[1];
  s2.b=w[1];
  int i1=1,i2=1;
  enum {Seg1,Seg2};
  int nextEventPoint;
  vector<Vector2> z; z.reserve(v.size()+w.size());
#define ADDPOINT(x) { \
  if(z.empty()) z.push_back(x); \
  else if(!z.back().isEqual(x,Epsilon)) { \
    if(!Lexical2DOrder(z.back(),x)) { \
      LOG4CXX_FATAL(KrisLibrary::logger(),"Out of order addition to z"); \
      LOG4CXX_FATAL(KrisLibrary::logger(),z.back()<<", "<<x); \
    } \
    assert(Lexical2DOrder(z.back(),x)); \
    z.push_back(x); \
  } \
}
  //eat up the leading segments of whichever chain starts first
  if(Lexical2DOrder(s1.a,s2.a)) {
    while(Lexical2DOrder(s1.b,s2.a)) {
      ADDPOINT(s1.a);
      i1++; s1.a=s1.b; s1.b=v[i1];
      assert(i1 < (int)v.size());
    }
    nextEventPoint=Seg2;
  }
  else {
    while(Lexical2DOrder(s2.b,s1.a)) {
      ADDPOINT(s2.a);
      i2++; s2.a=s2.b; s2.b=w[i2];
      assert(i2 < (int)w.size());
    }
    nextEventPoint=Seg1;
  }
  bool done=false;
  while(!done) {
    if(nextEventPoint == Seg1) {
      assert(!Lexical2DOrder(s1.a,s2.a) && !Lexical2DOrder(s2.b,s1.a));
      y1=s1.a.y;
      y2=eval_y(s2,s1.a.x);
      if(y1>=y2) {
        ADDPOINT(s1.a);
      }
    }
    else if(nextEventPoint == Seg2) {
      assert(!Lexical2DOrder(s2.a,s1.a) && !Lexical2DOrder(s1.b,s2.a));
      y1=eval_y(s1,s2.a.x);
      y2=s2.a.y;
      if(y2>=y1) {
        ADDPOINT(s2.a);
      }
    }
    if(s1.intersects(s2,p)) {
      if(!p.isEqual(s1.a,Epsilon) &&
         !p.isEqual(s1.b,Epsilon) &&
         !p.isEqual(s2.a,Epsilon) &&
         !p.isEqual(s2.b,Epsilon)) {
        if(Lexical2DOrder(s1.a,p) &&
           Lexical2DOrder(p,s1.b) &&
           Lexical2DOrder(s2.a,p) &&
           Lexical2DOrder(p,s2.b)) {
          ADDPOINT(p);
        }
        else {
          LOG4CXX_FATAL(KrisLibrary::logger(),"intersection point "<<p<<" violates the order: ");
          LOG4CXX_FATAL(KrisLibrary::logger(),s1.a<<" -> "<<s1.b);
          LOG4CXX_FATAL(KrisLibrary::logger(),s2.a<<" -> "<<s2.b);
          abort();
        }
      }
    }
    //what's the next event point? either s1.b or s2.b
    if(Lexical2DOrder(s1.b,s2.b)) {
      //increment seg1
      i1++;
      if(i1 >= (int)v.size())
        done=true;
      else {
        s1.a=s1.b; s1.b = v[i1];
      }
      nextEventPoint=Seg1;
    }
    else {
      //increment seg2
      i2++;
      if(i2 >= (int)w.size())
        done=true;
      else {
        s2.a=s2.b; s2.b = w[i2];
      }
      nextEventPoint=Seg2;
    }
  }
  assert(i1 == (int)v.size() || i2 == (int)w.size());
  //append remaining edges to edge list
  if(i1 == (int)v.size()) {
    //we still have the last point of v to take care of
    assert(Lexical2DOrder(s1.b,w[i2]));
    y1=s1.b.y;
    y2=eval_y(s2,s1.b.x);
    if(y1>=y2) {
      ADDPOINT(s1.b);
    }
    //fill out the rest of the chain with w
    while(i2 < (int)w.size()) {
      ADDPOINT(w[i2]);
      i2++;
    }
  }
  else {
    //we still have the last point of w to take care of
    assert(!Lexical2DOrder(v[i1],s2.b));
    y1=eval_y(s1,s2.b.x);
    y2=s2.b.y;
    if(y2>=y1) {
      ADDPOINT(s2.b);
    }
    //fill out the rest of the chain with v
    while(i1 < (int)v.size()) {
      ADDPOINT(v[i1]);
      i1++;
    }
  }
  /*
  //ERROR CHECKING
  XMonotoneChain f;
  f.v=z;
  assert(f.isValid());
  for(size_t i=0;i<z.size();i++) {
    Real x=z[i].x;
    if(x >= v.front().x && x <= v.back().x) {
      y1=f.eval(x); y2=eval(x);
      if(!(y1+0.001 >= y2)) {
        LOG4CXX_ERROR(KrisLibrary::logger(),"Error in MonotoneChain.upperEnvelope()!");
        LOG4CXX_INFO(KrisLibrary::logger(),y1<<" < "<<y2);
      }
      assert(y1+0.001 >= y2);
    }
    if(x >= w.front().x && x <= w.back().x) {
      y1=f.eval(x); y2=e.eval(x);
      if(!(y1+0.001 >= y2)) {
        LOG4CXX_ERROR(KrisLibrary::logger(),"Error in MonotoneChain.upperEnvelope()!");
        LOG4CXX_INFO(KrisLibrary::logger),y1<<" < "<<y2);
      }
      assert(y1+0.001 >= y2);
    }
  }
  */
  v = z;
}
Real XMonotoneChain::minimum(Real a,Real b,Real* x)
{
  assert(a <= b);
  Real xmin=a;
  Real ymin=eval(a);
  //could do this in log(n)+k time, but who cares...
  for(size_t i=0;i<v.size();i++) {
    if(v[i].x >= a && v[i].x <= b) {
      if(v[i].y < ymin) {
        xmin = v[i].x;
        ymin = v[i].y;
      }
    }
  }
  Real ytemp=eval(b);
  if(ytemp < ymin) { xmin=b; ymin=ytemp; }
  if(x) *x=xmin;
  return ymin;
}
void XMonotoneChain::SelfTest()
{
  XMonotoneChain c1,c2;
  c1.v.resize(2);
  c2.v.resize(2);
  c1.v[0].set(0,-1);
  c1.v[1].set(2,1);
  c2.v[0].set(0,1);
  c2.v[1].set(2,-1);
  c1.upperEnvelope(c2);
  for(size_t i=0;i<c1.v.size();i++) {
    LOG4CXX_INFO(KrisLibrary::logger(),c1.v[i]<<", ");
  }
  KrisLibrary::loggerWait();
}
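The C++ above implements an exact sweep; as a cross-check, here is a hedged pure-Python sketch of my own covering the same two ideas: evaluating a piecewise-linear x-monotone chain (what `XMonotoneChain::eval` does) and a naive sampled upper envelope, the pointwise maximum that the sweep computes exactly and without redundant points.

```python
from bisect import bisect_right

def eval_chain(chain, x):
    """Linearly interpolate an x-monotone chain [(x0,y0), (x1,y1), ...] at x."""
    xs = [p[0] for p in chain]
    assert xs[0] <= x <= xs[-1]
    if x == xs[-1]:
        return chain[-1][1]
    i = bisect_right(xs, x) - 1          # index of the segment containing x
    (ax, ay), (bx, by) = chain[i], chain[i + 1]
    u = (x - ax) / (bx - ax)
    return (1 - u) * ay + u * by

def upper_envelope_sampled(c1, c2, samples=5):
    # Naive O(samples * n) check: over the overlapping x-range, the
    # upper envelope is the pointwise max of the two chains.
    lo = max(c1[0][0], c2[0][0])
    hi = min(c1[-1][0], c2[-1][0])
    out = []
    for k in range(samples):
        x = lo + (hi - lo) * k / (samples - 1)
        out.append((x, max(eval_chain(c1, x), eval_chain(c2, x))))
    return out

# Same data as XMonotoneChain::SelfTest: two crossing segments.
c1 = [(0.0, -1.0), (2.0, 1.0)]
c2 = [(0.0, 1.0), (2.0, -1.0)]
print(upper_envelope_sampled(c1, c2))
```

For the SelfTest data the envelope dips to 0 at the crossing point x = 1 and rises to 1 at both ends, matching what the sweep should print.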
|
OPCFW_CODE
|
Resolution: Won't Do
Windows 2016-05-04, Windows 2016-05-18
During development and testing of
MODULES-2634, it was found that beaker-rspec was causing a whole host of issues with readability and tests:
- The existence of beaker-rspec was causing a lot of confusion in this
test suite, making it difficult to determine how to invoke Beaker
- Beaker-rspec also appeared to expect that there be database and
dashboard roles defined in the node definitions, or else it would
error during initial provisioning. Since this test suite does not
require a master, and is significantly slowed with a master present,
remove that requirement and update the node definitions accordingly.
- Beaker-rspec also had an expectation that the BEAKER_setfile env
var was always set (even if trying to use bundle exec beaker
--config at the command line), making for a painful user experience
when trying to run this job.
- Update the node definitions to use PE 2016.1.1 instead of PE 3.2
as the default PE version selected.
- Default the PUPPET_INSTALL_TYPE environment variable to 'agent'
so that this value does not have to be specified directly in
default Beaker command line invocation.
- Remove spec_helper_acceptance and move to a standard Beaker style
pre-suite setup by splitting it into spec/acceptance/setup, files
00-install_puppet.rb and 01_install_module.rb. Continue to use
the `run_puppet_install_helper` command to bootstrap initial agent
install while moving to `install_dev_puppet_module_on` to
bootstrap the module installation on agent nodes. This supports
both local testing / dev, and fake forge usage in the pipeline.
- Ignore the local ./tmp directory which Beaker uses to stage
the Windows installer in.
- Take all of the individual tests in exec_powershell_spec.rb, and
move them into spec/acceptance/tests as individual files. Since
the pre-suite lives inside the spec/acceptance/setup directory
now, --tests spec/acceptance can no longer be passed to Beaker, as it
would run the pre-suite a second time. This allows for
--tests spec/acceptance/tests to be passed in the invocation instead.
- Remove all rspec concepts - describe / it / expect and shared
examples, rewriting these constructs in vanilla Beaker. Move
individual test teardown (mostly removing files, but in one case
removing global env vars) to the individual tests rather than at
the suite level.
- Fixed the test that verifies pre-existing environment variables
continue to be available in their original state, despite
modifications in manifests. The test, based on envar_ext_test_pp,
had multiple problems: it was invoking the wrong manifest, and was
manipulating env vars improperly to demonstrate the desired behavior.
- This test setup now allows for individual tests to be exercised
via the command line much more easily
- relates to
MODULES-3280 Powershell - Remove Verbose Environment Variable Setting
MODULES-2634 PowerShell Module doesn't run template with try/catch
- links to
|
OPCFW_CODE
|
Help migrating from Ipcop 1.4.21
I have started the migration process from Ipcop 1.4.21 to pfsense 2.2.
4 nic cards:
Red (connected to bridged modem)
Green (connected to LAN switch)
Blue (Connected to wireless AP)
Orange (connected to internal DMZ servers)
Domain registered thru no-ip (let's use mydomain.com as an example here).
I have recreated the 4 networks on pfsense:
LAN (DHCP 192.168.1.1/24 with DHCP range from 192.168.1.200 to 192.168.1.250)
Blue (192.168.2.1 connected to a Netgear router configured as an AP 192.168.2.2)
Orange (192.168.3.1 connected to a Web Server 192.168.3.3, SIP Server 192.168.3.5 etc)
I started the pfsense box and from my laptop (192.168.1.74) I can ping both my AP and the web server (192.168.2.2 and 192.168.3.3)
Next I created firewall rules (see attached).
When I point my browser from my laptop to https://www.mydomain.com, I get a 404 error message. If I issue a tracert mydomain.com, it points to my external IP address.
Similarly, none of my SIP phones are connecting. I have created a firewall rule on WAN forwarding UDP 5060 and 10000-20000 to my 192.168.3.5 (SIP server)
What am I doing wrong?
"When I point my browsed from my laptop to https://www.mydomain.com, I get a 404 error message. if i issue a tracert mydomain.com, it points to my external ip address."
"Similarly, none of my SIP phones are connecting. I have created a firewall rule on WAN forwarding UDP 5060 and 10000-20000 to my 192.168.3.5 (SIP server)"
Are your sip phones and the sip server both behind the SAME pfsense?
Is https://www.mydomain.com a site running behind the same pfsense you are trying to connect from?
Yes, the phones are all connected to 192.168.1.X (LAN Network) while Elastix server is connected to the Orange network card (192.168.3.X)
If all of your phones and your server are on the LANs side of pfsense, you don't need any sip rules on the WAN. None.
Is there anything OUTSIDE your pfsense network that is using your elastix server? Phone? Video? Audio?
Are you pointing the SIP phones at the local LAN IP of the server or at some domain name or public IP?
This is just me… However.
If I had an Elastix server (I do have something like that) and ALL of my phones and other clients of that server were inside my network, I would not have any rules on my WAN at all related to Elastix. Also, I would put my Elastix server on the same subnet as my clients just to make things easy, unless you feel a need to have access to Elastix firewalled off from the LAN. Even if I decided to put my Elastix box on a separate subnet, I would not DMZ it. Why bother unless you have external clients?
Thanks again for your reply.
Perhaps this pics will help clarify.
I have Sip Phones connected the LAN interface and I also have remote phones which would be connecting thru the WAN.
In both scenarios, all phones have mydomain.com in the domain setting.
Hope this helps clarify.
How many remote phones are out there? Are they at many sites?
On your pfsense you will need a domain override to point to the local address of your server.
I have one fixed remote site, and my laptop also has a SIP softphone which I use for my travel.
How do I enable the "domain override"? Sorry for the dumb question :)
I'd set up VPN at the remote site just for the sip and laptop also. Then close all those forwarded ports. This will 100% eliminate NAT issues and make things far more secure.
With a sip server, you can end up fighting with NAT for ages. A good UDP VPN server will fix you right up.
as far as domain overrides, what are you using for DNS?
pfsense is getting the default DNS servers from Verizon
i.e. 220.127.116.11 and 18.104.22.168
Try Services: DNS forwarder
Then in there at bottom, Host overrides / domain overrides.
You can use this to make your things resolve to an internal local IP (the SIP server IP, for example) instead of the public IP.
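For anyone following along: pfSense's DNS Forwarder is dnsmasq under the hood, and an override of this kind is essentially the following dnsmasq directive (the hostname and IP below are just the examples from this thread):

```
# resolve the public name to the internal web server instead of the WAN IP
address=/www.mydomain.com/192.168.3.3
```

Clients on the LAN that use pfSense as their DNS server then reach the server directly, avoiding the NAT-reflection problem entirely.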
Me personally, I just use IPs directly at the SIP device instead of relying on DNS.
|
OPCFW_CODE
|
Typos in the utils.py
Lines 31 and 32 in kan/utils.py have the degrees in the wrong order: they should be 4 and 5, respectively, instead of 3.
these numbers are complexities. one can choose their favorite numbers. For example, I think x^3 and x^4 are equally complex, so I assign complexity 3 to both functions. However, if you think x^4 is more complicated than x^3, you can assign 4 to x^4 while assign 3 to x^3.
these numbers are complexities. one can choose their favorite numbers. For example, I think x^3 and x^4 are equally complex, so I assign complexity 3 to both functions. However, if you think x^4 is more complicated than x^3, you can assign 4 to x^4 and assign 3 to x^3.
Thanks for the explanation!
Hi Ziming,
I was trying to recover L-J potential:
$V_{LJ}(r) = 4 \varepsilon [(\frac {\sigma} {r})^{12} - (\frac {\sigma} {r})^6]$
where r is the distance between two points and \sigma and \epsilon are arbitrary positive floats. It seems a width=[1, 2, 1] KAN is not able to recover the accurate formula though I tried to fix the symbolic form to x^6 and x^12 or 1/x^6 and 1/x^12 (and this is why I noticed the formula details in utils.py :)).
Could you please give any suggestions on this kind of formula?
It seems a width=[1, 2, 1] KAN is not able to recover the accurate formula though I tried to fix the symbolic form to x^6 and x^12 or 1/x^6 and 1/x^12
This sounds about right, but to make sure: fix the two activation functions (0,0,0) and (0,0,1) to be linear, (1,0,0) to 1/x^6 and (1,1,0) to 1/x^12, right? And probably you want fit_params=False.
I think this should work if done right. :)
fix the two activation functions (0,0,0) and (0,0,1) to be linear, (1,0,0) to 1/x^6 and (1,1,0) to 1/x^12, right? And probably you want fit_params=False.
Hi Ziming,
Thanks for your prompt reply.
Yes, I set fit_params = False. But I didn't constrain all the nodes to be symbolic (I wonder if that will be identical to scipy.optimize.curve_fit).
To be more specific, here is my scratch code:
from kan import MultKAN as KAN
import numpy as np
import matplotlib.pyplot as plt
import torch
torch.set_default_dtype(torch.float64)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
from kan.utils import create_dataset
eps = 1.0
sigma = 1
f = lambda x: eps*((sigma/x)**12 - (sigma/x)**6)
dataset = create_dataset(f, n_var=1, train_num=1000, test_num=1000, ranges=[0.8, 6], device=device)
dataset['train_input'].shape, dataset['train_label'].shape
xi = np.linspace(1, 6, 1000)
xi_gpu = torch.tensor(xi.reshape(1000, 1)).to(device)
plt.plot(xi, f(xi), label='true')
plt.scatter(dataset['train_input'].cpu(), dataset['train_label'].cpu(), label='train_set')
plt.scatter(dataset['test_input'].cpu(), dataset['test_label'].cpu(), label='test_set')
plt.legend()
plt.show()
# Add symbolic
from kan.utils import add_symbolic
f_inv6 = lambda x, y_th: ((x_th := 1/y_th**(1/6)), y_th/x_th*x * (torch.abs(x) < x_th) + torch.nan_to_num(1/x**6) * (torch.abs(x) >= x_th))
add_symbolic('1/x^6', lambda x: 1/x**6, c=6, fun_singularity=f_inv6)
f_inv12 = lambda x, y_th: ((x_th := 1/y_th**(1/12)), y_th/x_th*x * (torch.abs(x) < x_th) + torch.nan_to_num(1/x**12) * (torch.abs(x) >= x_th))
add_symbolic('1/x^12', lambda x: 1/x**12, c=12, fun_singularity=f_inv12)
model = KAN(width=[1, 2, 1], grid=50, k=12, seed=12345, device=device)
model.fix_symbolic(0,0,0,'x^12', fit_params_bool=False);
model.fix_symbolic(0,0,1,'x^6', fit_params_bool=False);
# model.fix_symbolic(1,0,0,'x', fit_params_bool=False);
# model.fix_symbolic(1,1,0,'x', fit_params_bool=False);
# train the model
history = model.fit(dataset, opt="LBFGS", steps=100, lamb=0.001, lamb_entropy=10, lr=0.1)
xi = np.linspace(0.8, 6, 1000)
xi_gpu = torch.tensor(xi.reshape(1000, 1)).to(device)
pred = model(xi_gpu)
plt.plot(xi, f(xi), label='true')
plt.plot(xi, pred.detach().cpu().numpy(), label='pred')
plt.legend()
plt.tight_layout()
plt.show()
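As an aside, the singularity protection in the `fun_singularity` lambdas above can be stated more plainly. Below a threshold `x_th`, the true `1/x^n` would exceed `y_th`, so it is replaced by the straight line through the origin that meets the curve at `(x_th, y_th)`. This is my own hedged restatement of what those one-liners do, not pykan's actual implementation:

```python
def safe_inv_pow(x, n, y_th=1e3):
    """1/x**n, linearized near the singularity at x = 0.

    For |x| >= x_th the true value is returned; below the threshold a
    line through the origin with slope y_th / x_th is used, so the two
    branches meet continuously at (x_th, y_th).
    """
    x_th = y_th ** (-1.0 / n)
    if abs(x) >= x_th:
        return 1.0 / x ** n
    return (y_th / x_th) * x

# The branches agree at the threshold, and the function stays finite at 0.
n = 6
x_th = 1e3 ** (-1.0 / n)
print(safe_inv_pow(x_th, n))  # ~ y_th = 1000
print(safe_inv_pow(0.0, n))   # 0.0 instead of a division by zero
```

Keeping the surrogate continuous matters during fitting: a jump at the threshold would give the optimizer spurious gradients near the singular region.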
|
GITHUB_ARCHIVE
|
Another month has passed, so it's time for another Simply Explained newsletter. This might be the longest one yet, filled with cool things I found on the internet.
Before you dive in, I want to tell you why I write this newsletter. I'm a curious person, and I'm interested in a lot of areas. My life's slogan is "while alive, keep learning."
Through this newsletter, I want to share my excitement and curiosity with you. I hope I can "infect" you (poor choice of words right now?) with the same passion for science, technology, and anything else that's even remotely interesting.
I hope you'll enjoy this newsletter, and if you did, feel free to share it with friends and family. You can also reply to this email if you have feedback or suggestions.
Enjoy the week, keep it safe, and stay curious,
👨🏫 Simply Explained
My YouTube channel has been attracting a lot of spammers. They try to trick people into investing money with them or claim to recover lost Bitcoin wallet keys.
I've been marking these as spam, but YouTube's spam filter doesn't seem to update itself. So time to take matters into my own hands and build a spam filter myself (with TensorFlow and the YouTube Data API).
I'm generating a lot of digital data. Constantly taking photos, making videos, coding projects, etc. I'm storing all these files on Google Drive, which has been very reliable but also a bit risky. What if Google closes my account? Or loses my files?
To back up my files, I built a NAS (Network Attached Storage) from an old enterprise server. The ultimate Synology-killer!
🤓 Cool Stuff I Found on the Internet
Want to spy on your neighbors? Just look at their light bulbs and observe the tiny vibrations that sound creates on the glass surface. That's it! All you need is a clear line of sight. Wow!
This article made me think about a video from Smarter Every Day where they used a laser to trigger the microphone on Google Assistant and Alexa devices. By pulsating the laser, they could give it commands such as "open the front door."
A frozen microscopic worm found in Siberian permafrost has woken up after being thawed in a lab. It was perfectly fine and able to reproduce after all these years.
Why is this important? These creatures are very resistant to radiation and can withstand very harsh conditions (drying, starvation, and low oxygen). Understanding how they do this can help us with deep-space travel, cryo-preservation of cells and organs, and much more.
Also, it makes me think: how many other (unknown) organisms are frozen in the Arctic and waiting to be revived by global warming?
Can we live forever? And if not, why not? Is there a definite limit on how long we can live? It turns out there is a limit on how many times our cells can divide, called the Hayflick limit. This limits our lifespan to around 120 years. But maybe we can re-engineer our cells to extend the Hayflick limit?
It also reminded me of a great Vsauce video: "Should I Die?"
During their Developer Conference, Apple announced "Mail Privacy Protection." This feature will block invisible tracking pixels that keep track of how many times you open an email.
Why is this a big topic? Apple's Mail app is used by millions of people on macOS and iOS devices. This has a massive impact on newsletters that rely on these statistics for sponsorships.
As for this newsletter: I don't care too much about opening rates. My mail provider (Revue) does keep track of this, but I don't look at these metrics. This newsletter is tiny, and the goal is not to make money but to share my excitement and curiosity with others.
Code stylometry tries to train a computer to recognize the coding style of an individual. I came across this during Lex Fridman's interview with Charles Hoskinson. Charles suggested using code stylometry to identify Satoshi Nakamoto. Take the original Bitcoin code, and compare the coding style to public GitHub repositories.
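As a toy illustration of what "coding style" might mean to a classifier (purely my own sketch, far cruder than real stylometry models), one can already compute telling features from raw source text, such as indentation habits and identifier-naming ratios:

```python
import re

def style_features(source: str) -> dict:
    """Crude stylometric features of a piece of source code."""
    lines = source.splitlines()
    tabs = sum(1 for ln in lines if ln.startswith("\t"))
    spaces = sum(1 for ln in lines if ln.startswith(" "))
    # Count snake_case vs camelCase identifier occurrences.
    snake = len(re.findall(r"\b[a-z]+(?:_[a-z0-9]+)+\b", source))
    camel = len(re.findall(r"\b[a-z]+(?:[A-Z][a-z0-9]+)+\b", source))
    return {
        "tab_indented_lines": tabs,
        "space_indented_lines": spaces,
        "snake_case_ids": snake,
        "camelCase_ids": camel,
    }

sample = "def add_item(x):\n    totalCount = x\n    return totalCount\n"
print(style_features(sample))
```

A real stylometry system would feed hundreds of such features (plus syntax-tree patterns) into a classifier trained on known authors; this snippet only shows the flavor of the feature-extraction step.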
Interesting piece about how meetings are conducted at Amazon. Most meetings start with participants reading a short document containing an idea or a problem to be solved (including numbers, charts, …)
Not only does this have many positive effects on the meeting itself, it also creates a written track record of ideas, decisions, and problems.
One of my favorite YouTube channels posted a video about how the reign of the dinosaurs ended. It also puts the reign of the human race into perspective. We haven't been around for very long (certainly not compared to the dinosaurs), and yet we've completely transformed our world.
A friend of mine suggested this: a microscope built out of Lego and old iPhone camera parts! It aims to get kids interested in science, and it actually works incredibly well.
To reduce our carbon footprint, we're massively switching to renewable energy sources. The problem with those is that they're not always available. The sun doesn't always shine, and the wind doesn't always blow.
Storing energy is going to be crucial, and batteries are playing a role in that. This article looks at the price evolution of batteries since 1991.
This article proposes that we don't need batteries to store energy. Instead, we have to shift our electricity consumption to reduce the load on the grid.
For instance: instead of running water heaters on a dumb timer, connect them to a "smart grid" to signal them when there is an excess of electricity. They can then consume that electricity, heat your water and prevent the heater from using power during peak hours.
I would like to see power companies build APIs that we can integrate with systems like Home Assistant. That way, I could trigger my "dumb" water heater with a "smart" switch.
Electric cars seem to be the way forward. As a result, many governments are pushing the adoption. But how eco-friendly is an electric car anyway?
Polestar has published a Life Cycle Assessment (LCA) in which they look at the lifetime carbon footprint of their car.
In a nutshell: when only using clean energy, you need to drive 50,000km to offset the carbon footprint of the car (production costs, transportation, batteries, etc.) After this mileage, you're saving the environment. It's not a small amount, but it's doable.
I'm still very fascinated by the pandemic and everything around it. I still have many questions. How did the coronavirus come to be? How do the vaccines work? What practical measures can we take? etc.
I realize that not everyone shares this curiosity, so feel free to skip this section if you're tired of all the corona-related news.
Governments have taken a lot of (unpopular) measures to stop the pandemic. Could you do any better?
This "Corona Game" is a simulator that puts you in the driving seat. You're in charge of the Czech Republic, and you can take any action you'd like to prevent COVID19 from spreading. For example, close schools, close the border, limit events, mandate masks, etc.
Try to limit infections and deaths, limit government debt, and keep your population happy at the same time.
At the end of the game, you can see how well you did compared to other players and compared to the actual Czech government. There's also a page explaining the models and methodology behind this simulation.
This article shines a light on how coronaviruses are being studied. To my surprise, laboratories routinely create new viruses, called "chimeras." Researchers take certain parts of one virus (such as the Spike protein), fuse it together with another virus, and see if it could replicate in human cells.
This type of research aims to find out how likely it is that a virus could jump species and to create a universal vaccine against the family of coronaviruses.
We still don't know where COVID19 originated, but it's fascinating to see the kinds of research we're doing to try to predict and prevent pandemics.
While the previous article suggests that a lab leak could be real, this article pushes the opposite idea. The coronavirus grew in bats and made the jump to humans.
Why haven't we found the source yet, then? Well, it took us 15 years to trace SARS back to bats. Same thing for Ebola: we're pretty confident that it came from bats, but we have yet to find bats carrying the virus.
The article also says that we don't know where the virus first started spreading. The media often calls out Wuhan, but there had been earlier cases hundreds of kilometers away.
By rooting through files stored on Google Cloud, a researcher says he recovered 13 early coronavirus sequences that had disappeared from a database last year.
It's believed that the coronavirus jumped from bats to humans at Wuhan's Seafood Market. However, these sequences turned out to be more distantly related to the bat variant. This indicates that Wuhan might not have been the origin but rather the first super-spreading event.
Wow, you've made it all the way to the end. Thank you so much!
Have any feedback about this newsletter? Let me know by replying to this email.
Have a good week!
|
OPCFW_CODE
|
import { globalUni } from '@rng/global-rng';
import { cl, select } from '@common/debug-mangos-select';
const rnbinomDomainWarns = select('rnbinom')("argument out of domain in '%s'");
const rnbinomMuDomainWarns = select('rnbinom_mu')("argument out of domain in '%s'");
rnbinomDomainWarns;
//rnbinomMuDomainWarns;
import { RNGkind, setSeed } from '@rng/global-rng';
import { rnbinom } from '..';
describe('rnbinom', function () {
describe('invalid input', () => {
it('throws when "prob" is missing or when both "prob" and "mu" are given', () => {
expect(() => rnbinom(1, 10, undefined, undefined)).toThrowError('argument "prob" is missing, with no default');
expect(() => rnbinom(1, 10, 5, 6)).toThrowError('"prob" and "mu" both specified');
});
});
describe('using prob, not "mu" parameter', () => {
beforeEach(() => {
cl.clear('rnbinom');
cl.clear('rnbinom_mu');
globalUni().init(97865);
});
it('n=10, size=4, prob=0.5', () => {
const r = rnbinom(10, 4, 0.5);
expect(r).toEqualFloatingPointBinary([4, 8, 3, 5, 4, 3, 6, 4, 2, 5]);
});
it('n=10, size=400E+3, prob=0.5', () => {
const r = rnbinom(10, 400e3, 0.5);
expect(r).toEqualFloatingPointBinary([
400308, 401016, 399030, 399988, 399968, 400430, 401002, 399588, 398948, 399601
]);
});
it('n=10, size=Infinity, prob=0.5', () => {
const nan = rnbinom(10, Infinity, 0.5);
expect(nan).toEqualFloatingPointBinary(NaN);
});
it('n=2, size=1, prob=1', () => {
const z = rnbinom(2, 1, 1);
expect(z).toEqualFloatingPointBinary(0);
});
it('n=10, size=8, prob=0.2 (SUPER_DUPER uniform, BOX_MULLER normal)', () => {
RNGkind({ uniform: 'SUPER_DUPER', normal: 'BOX_MULLER' });
setSeed(1234);
const z = rnbinom(10, 8, 0.2, undefined);
expect(z).toEqualFloatingPointBinary([21, 39, 44, 20, 26, 42, 59, 23, 22, 35]);
});
});
describe('using mu, not "prob" parameter', () => {
beforeEach(() => {
cl.clear('rnbinom');
cl.clear('rnbinom_mu');
});
it('n=10, size=8, mu=12 (prob=0.6)', () => {
RNGkind({ uniform: 'SUPER_DUPER', normal: 'BOX_MULLER' });
setSeed(1234);
const z = rnbinom(10, 8, undefined, 12);
expect(z).toEqualFloatingPointBinary([10, 10, 17, 6, 9, 14, 10, 12, 3, 5]);
});
it('(check M.E.)n=1, size=8, mu=NaN', () => {
const nan = rnbinom(1, 8, undefined, NaN);
expect(nan).toEqualFloatingPointBinary(NaN);
expect(rnbinomMuDomainWarns()).toHaveLength(1);
});
it('n=1, size=8, mu=0', () => {
const z = rnbinom(1, 8, undefined, 0);
expect(z).toEqualFloatingPointBinary(0);
});
});
});
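The tests above exercise `rnbinom` through both parameterizations. Under R's convention (which this port appears to follow, judging by its error messages), the two are linked by `mu = size * (1 - prob) / prob`, i.e. `prob = size / (size + mu)`; note that under this convention the 0.6 in one test title corresponds to `1 - prob`, not `prob`. A minimal standalone sketch of the conversion — the helper name `probFromMu` is my own, not part of the library:

```typescript
// Convert R-style negative-binomial "mu" parameterization to "prob".
// mu = size * (1 - prob) / prob  =>  prob = size / (size + mu)
function probFromMu(size: number, mu: number): number {
    if (!(size > 0) || !(mu >= 0)) {
        return NaN; // out of domain, mirroring rnbinom's NaN results for bad inputs
    }
    return size / (size + mu);
}

// For the test 'n=10, size=8, mu=12': prob = 8 / (8 + 12) = 0.4
console.log(probFromMu(8, 12)); // 0.4
```

With `mu = 0` the conversion gives `prob = 1`, consistent with the test asserting that `rnbinom(1, 8, undefined, 0)` returns all zeros.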
IBM DB2 VERSIONS DRIVER DETAILS:
|File size:||4.8 MB|
|Supported systems:||All Windows 32-bit/64-bit and Mac OS|
|Price:||Free* (*free registration required)|
IBM DB2 VERSIONS DRIVER (ibm_db2_1297.zip)
IBM WCS (IBM WebSphere Commerce), DataLoad utility.
mxODBC Connect Server is compatible with the IBM DB2 ODBC drivers. You can deploy IBM DB2 pureScale on Azure and share data among the multiple virtual machines that run the DB2 pureScale engine; high availability of IBM DB2 LUW on Azure VMs is supported on Red Hat Enterprise Linux Server. DB2 was initially designed to work exclusively on IBM mainframes. As Tom V's answer notes, db2level is the simplest means of learning the version of a DB2 instance, but there are a couple of issues with it: first, you must have shell access to the server, and second, you must be careful that the appropriate db2profile environment is sourced when running db2level — it's entirely possible, even common in my experience, to have multiple versions of DB2 installed. I've tried downloading a trial of Zend Server 9.1 and using the bundled PHP IBM file in a non-Zend-Server WAMP stack, but no luck for either the 32-bit or 64-bit version. When using IBM DB2 client software, the vendor client software must be in the folder specified in the environment variable PATH (Windows), LIBPATH (AIX), or LD_LIBRARY_PATH (Linux and Solaris).
DB2 brings performance and synergy with IBM System z hardware, and opportunities to drive business value in the following areas. There are a few ways to check the DB2 edition and installed features. Sensing a change in the way customers store and analyze data, IBM has updated its flagship DB2 relational database management software to handle a wider range of data-processing duties. IBM's DB2 12 for z/OS continues a pattern of new versions concentrating on substantial performance improvements coupled with reductions in resource requirements. To get the version information remotely, we have to use the SYSIBMADM views. An IBM DB2 9.0 review by David McAmis in Data Management: DB2 9.0 has a lot for the newcomer or seasoned hand alike.
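The SYSIBMADM route mentioned above can be sketched as a single query. On DB2 LUW 9.7 and later, the ENV_INST_INFO administrative view exposes the instance's version over a normal database connection, without shell access; the column names below follow IBM's documentation, but verify them against your release:

```sql
-- Check the DB2 version remotely, without shell access to the server.
SELECT INST_NAME, RELEASE_NUM, SERVICE_LEVEL, BLD_LEVEL, FIXPACK_NUM
  FROM SYSIBMADM.ENV_INST_INFO;
```

SERVICE_LEVEL and FIXPACK_NUM together correspond to what db2level reports locally.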
IBM i offers an upgrade path so that application software written for previous operating systems on IBM System i can be migrated to currently supported hardware without needing to be modified or recompiled. DB2 was initially designed to run on the IBM mainframe platform only. The driver returns result-set metadata for parameterized statements that have been prepared but not yet executed.
This is the frequent file name that indicates the IBM DB2 Content Manager client for Windows installer. The driver supports parameter arrays, processing the arrays as a series of executions, one execution for each row in the array. All about the IBM DB2 database and its features: DB2 is a family of relational database management system (RDBMS) products from IBM that serve a number of different operating-system platforms. Download DB2 Express-C for free. On Windows 64-bit operating systems, 32-bit versions of Rexx are supported on 64-bit DB2 instances only for DB2 Version 9.5 Fix Pack 5 and later, and DB2 Version 9.7 Fix Pack 1 and later.
This application can support numerous DB2 objects as well as all DB2 data types. When the value in each option is being set, some servers might not handle the entire length provided and might truncate the value. View and download the IBM DB2 manual online. IBM DB2 is a family of hybrid data management products offering a complete suite of AI-empowered capabilities designed to help you manage both structured and unstructured data on premises as well as in private and public cloud environments. CVE-2017-1452: IBM DB2 for Linux, UNIX and Windows 9.7, 10.1, 10.5, and 11.1 (includes DB2 Connect Server) could allow a local user to obtain elevated privileges and overwrite DB2 files. z/OS V2.1 with IBM DB2 11 for z/OS (5615-DB2), running on zEC12 or zBC12 or later systems with CFLEVEL 18, is planned to exploit new function that allows batched updates to be written directly to disk without being cached in the coupling facility in a Parallel Sysplex. Use of this information constitutes acceptance for use in an as-is condition.
This product stemmed from two earlier products, DB2 Common Server Version 2 and DB2 Parallel Edition. IBM DB2 is a next-generation data platform for transactional and analytical operations. DB2 Database (formerly known as DB2 for Linux, UNIX and Windows, or DB2 LUW for brevity) is a database server product developed by IBM; it is part of the DB2 family of database products. This solution is designed to manage access to your enterprise information wherever it is stored. Version 5.1 is the one most frequently downloaded by the program's users.
Following is my code to connect Python to DB2. DB2 management, tutorials, scripts, coding, programming, and tips for DB2 database administrators. DB2 Version 9.7 for Linux, UNIX, and Windows connects via native TCP/IP, or to the SNA environments of that or other DB2 versions. For IBM DB2, the current UDB version is 10.5, with the BLU Acceleration features and the code name 'Kepler'. DB2 10.0.5.5: the IBM Data Server Provider for .NET extends database server support for the interface. The DB2 12 for z/OS Technical Overview IBM Redbooks publication introduces the enhancements made available with DB2 12 for z/OS.
WebSphere Commerce (also known as WCS, WebSphere Commerce Suite) is a software platform framework for e-commerce, including marketing, sales, customer, and order-processing functionality in a tailorable, integrated package. It is a single, unified platform offering the ability to do business directly with consumers, with businesses, or indirectly through channel partners. CVE-2019-4154: IBM DB2 for Linux, UNIX and Windows (includes DB2 Connect Server) 9.7, 10.1, 10.5, and 11.1 is vulnerable to a buffer overflow, which could allow an authenticated local attacker to execute arbitrary code on the system as root. There are 10 pages in the category "IBM DB2". For the other editions, users can download the corresponding fixed editions: DB2 V9.7 FP11, V10.1 FP6, and V10.5 FP10. IBM DB2 Version 10.5 for Linux, UNIX, and Windows offers accelerated analytic processing by introducing a new processing paradigm and data format within the DB2 database product. Technical sessions and hands-on labs from IBM and Red Hat experts are available.
OS/2 is a series of computer operating systems, initially created by Microsoft and IBM under the leadership of IBM software designer Ed Iacobucci. This page answered that question for me. Database 2 (DB2) for Linux, UNIX, and Windows is a data server developed by IBM. I want to connect Python to DB2 Version 9.1 using the IBM DB2 ODBC driver.
New in IBM DB2 Express-C 10.5.100.64: IBM DB2 Version 10.5 for Linux, UNIX, and Windows offers accelerated analytic processing by introducing a new processing paradigm and data format. The IBM DB2 database has a long history. The latest version is 9.7.1, updated on 2019-06-14. The DB2 connector includes a Microsoft client that communicates with remote DB2 servers across a TCP/IP network. What is DB2? IBM DB2 is a family of related data management products, including relational database servers, developed and marketed by IBM. DB2 Tutorial, Chapter 1, describes the history of DB2, its versions, editions, and their respective features. DB2 fix pack images are delivered from Fix Central.
Since 1990, IBM has developed a universal database (UDB) DB2 server that can run on mainstream operating systems such as Linux, UNIX, and Windows. Section 2.2.2 covers creating a new DB2 database using the 'db2cc' utility. Included are links for the DB2 Universal Fix Pack, DB2 Server Fix Pack, DB2 Connect, Net Search Extender, Spatial Extender, Query Patroller, DB2 Client, DB2 Run-Time Client, DB2 wrappers, and all drivers (ODBC, CLI, JDBC, .NET). These instructions should also work with older Ubuntu versions. This guide by Sanders and Shruthi Subbaiah Machimada will help you download and install IBM DB2 Developer-C software on Ubuntu Linux, and then create a sample database. Answer: DB2 is a subsystem of the MVS operating system.
If this problem persists, contact IBM Support. For more information, see the FP4 driver version enhancements. Translated DB2 UDB V8 product manuals in PDF format, and other DB2 versions (English and translated), are also available. Usually this is considered a restricted license. See also: Spatial Support for DB2 for z/OS User's Guide and Reference for DB2 10, and DB2 SQL Performance Analyzer for z/OS. My initial question was: what version do I use? Running an SAP system on IBM DB2 11.1 with the DB2 pureScale feature is supported. It is a database management system (DBMS) for that operating system.
Your entitlement to use DB2 may be restricted in some way, and the product documentation should tell you what edition of DB2 you get. The advent of relatively low-cost real memory allows IT support to balance resource constraints, perhaps mitigating CPU, I/O, or elapsed-time issues by increasing real memory. For Linux and UNIX operating systems, if the endianness (big-endian or little-endian) of the backup and restore platforms is the same, you can restore backups that were produced on earlier versions.