|02 Jan 2012||#1|
Office 2010 Activation Problem on Newly installed Windows 7
Just a bit of background: I recently upgraded my fiancée's father's Compaq laptop from Vista to Windows 7 Professional with the student upgrade package offer. In doing so I chose a custom install, and the old files are still stored on the computer.
The problem came when I attempted to install Office Academic 2010 back onto the laptop. This version of Office was another purchase using student deals, so the product key could be used on 2 different computers (my fiancée's laptop and her dad's laptop). The laptop had the same version of Office before the upgrade to Windows 7. However, now that Windows 7 is loaded, when I try to install Office 2010 it claims that the Office key is incorrect and refuses to accept it and install.
I'm usually OK with sorting these kinds of things out, but I'm pretty stumped as to where to go with this, and the Microsoft Office site is...shall we say, less than helpful.
Thanks for any help!
P.S. If this is already in another thread I'm happy to accept a redirect, but I couldn't find anything of this nature.
|23 Jan 2012||#3|
First of all, I apologise for the late reply. I have been rather busy lately and only just sorted this out.
Secondly, I found that the code was wrong. I must have been using the one I used to activate Windows 7. I used ProduKey to find the key from the other computer that Office was installed on, transferred it across, and it worked first time.
Thanks for the advice anyway. I'm going to save that number, because I have a friend whose computer's code isn't activating, so that number will come in handy.
|
OPCFW_CODE
|
import os
import socket

TARGET_IP = "127.0.0.1"
TARGET_PORT = 5005
BLOCK_SIZE = 1024

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

fileName = "picture.png"
fileSize = os.stat(fileName).st_size

with open(fileName, "rb") as fp:
    payload = fp.read()

sent = 0
# Send the file in BLOCK_SIZE chunks over UDP.
for i in range(0, len(payload), BLOCK_SIZE):
    data = payload[i:i + BLOCK_SIZE]
    sock.sendto(data, (TARGET_IP, TARGET_PORT))
    sent += len(data)  # count payload bytes (sys.getsizeof would include Python object overhead)
    print("Sent %.0f%% with %s bytes of data" % (sent / fileSize * 100.0, len(data)))
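The script above only shows the sending side. As an illustrative sketch (the receiver, the OS-assigned port, and the loopback demo are assumptions, not part of the original), a matching UDP receiver could look like:

```python
import socket

# Hypothetical matching receiver. Binding to port 0 lets the OS pick a free
# port, so this loopback demo is self-contained.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(2.0)
port = recv_sock.getsockname()[1]

BLOCK_SIZE = 1024
payload = b"x" * 2500  # stand-in for the file bytes read by the sender

# Re-use the sender's chunking loop against our own receiver.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(0, len(payload), BLOCK_SIZE):
    send_sock.sendto(payload[i:i + BLOCK_SIZE], ("127.0.0.1", port))

# Reassemble the chunks. Note UDP gives no delivery or ordering guarantees
# in general; on loopback with small payloads this is normally fine.
received = b""
while len(received) < len(payload):
    chunk, _ = recv_sock.recvfrom(BLOCK_SIZE)
    received += chunk
```

For real transfers you would also want a length header or end-of-file marker, since the receiver otherwise has no way to know the file size in advance.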
|
STACK_EDU
|
AppUpdate BETA 3 is now available in the Android Market!
To try it out just get it from the Android Market.
AppUpdateTM is a friend-sourced app-discovery network that helps you and your friends on Facebook find and share the best Android apps.
Follow us on Twitter (@AppUpdateTweets)
Check out the April 20th, 2011 MoDevDC AppUpdate presentation video.
The server side software has been updated and now includes a number of new features:
- crossposting to Facebook now works. You can turn crossposting off and on via the Settings menu in the upper right after you log in.
- the profile view has been expanded to include all of your app recommendations and updates.
AppUpdate BETA 3 is now available from the Android Market! This is a massive update with countless improvements and a bunch of new features, including:
- notifications are now working correctly.
- you can now select how often AppUpdate should check for notifications from the main screen (just press the menu button)
- when looking at the news feed or top apps list, you can now see what group you're looking at, whether friends or everyone.
- you can post comments to updates and even suggest an app to go along with a comment.
- the app info page now lets you select to see updates about that app by your friends or other public members.
- long click context menus are available on most items. (click and hold on an app or update to see the options.)
- you can now delete updates and comments by long clicking and selecting delete from the menu.
- you can now unshare an app by long clicking on the app from your My Apps screen and selecting unshare app.
We've also fixed quite a number of bugs, but as always if you notice any weirdness please let us know.
For questions and comments about this release, please post in the users forum.
A nasty "force close" bug made it through our testing. If you clicked on an app shared by someone who was not a friend AppUpdate would crash. We've addressed this and done a quick release of AppUpdate Beta 2. To install it go here.
After many more months of work than we were planning, AppUpdate Beta 1 is now available! This is an OPEN BETA, which means you are free to share it with anyone you would like. For more information check out the release notes article.
If you are on your Android 2.2 mobile device just head over to the Mobile Page to install it.
We've been working on finishing up:
- a "recommend an app" feature where as part of a comment to an app status update you'll be able to include one of the apps that you have shared.
- notifications when someone posts a comment on your updates or one of your friends shares a new app. You'll be notified by email and in the app. The email notification feature can be turned off from the site.
- we've been making quite a bit of progress improving the performance of the app.
We've received the video footage from the April 20th 2011 MoDevDC presentation which I've posted in a blog article here.
AppUpdate was publicly presented for the first time at the MoDevDC meetup, where 125 mobile developers and members of the business community attended. The event was recorded and I'm hoping to get a link to the video.
We've gone through a few iterations of alpha releases and are approaching beta status. The app is being used by an increasing number of alpha testers and we are getting really good feedback.
The first alpha version of the Android AppUpdate app is now working and we are starting alpha testing. If you have an Android phone and the Facebook app installed and are up for some early alpha testing, please contact me.
Stacie, of 48thAve Productions, is working on a look for the AppUpdate site and soon you will be able to get all the same social views of apps here that you can on the phone app.
Check back for more updates on our progress.
|
OPCFW_CODE
|
feat(otlp-trace-exporters): Add User-Agent header to OTLP trace exporters
Which problem is this PR solving?
Updates #3291
Short description of the changes
add user agent to otlp grpc trace exporter
add user agent to otlp http/json trace exporter
add user agent to otlp http/proto trace exporter
I tried adding this to the browser exporter but get an error for setting an Unsafe Header.
This should no longer be forbidden but I wasn't sure what I was missing.
Type of change
Please delete options that are not relevant.
[x] New feature (non-breaking change which adds functionality)
How Has This Been Tested?
[x] Unit tests
Checklist:
[x] Followed the style guidelines of this project
[x] Unit tests have been added
[ ] Documentation has been updated
I believe it is better to make the implementations in the base packages (otlp-exporter-base, otlp-grpc-exporter-base and otlp-proto-exporter-base), which also serve as the base packages of the metrics/logs exporters.
This should no longer be forbidden but I wasn't sure what I was missing.
That depends on browser implementation. Chrome Browser currently doesn't support that.
I believe it is better to make the implementations in the base packages (otlp-exporter-base, otlp-grpc-exporter-base and otlp-proto-exporter-base), which also serve as the base packages of the metrics/logs exporters.
I started there, but because of the way the headers are built currently, I need them added in the individual exporters. The headers for each exporter are intended to override the generic variables. For example, the OTEL_EXPORTER_OTLP_TRACES_HEADERS are used over OTEL_EXPORTER_OTLP_HEADERS for traces, so even if I set it in base it will get clobbered by the trace-specific setup. If I set them in the base packages, I'll have to update each of these constructors anyway. I'm open to trying that if others agree that is the best way to do this.
This should no longer be forbidden but I wasn't sure what I was missing.
That depends on browser implementation. Chrome Browser currently doesn't support that. I think we can just ignore the warnings.
I was getting errors when running tests. It's possible the issue is that I tried overwriting user agent instead of appending 🤔 I'm definitely less familiar with the browser implementations.
Error: Uncaught AssertionError: 'Refused to set unsafe header "User-Agent"' === 'Request Timeout' (webpack-internal:///../../../node_modules/assert/assert.js:199)
I believe it is better to make the implementations in the base packages (otlp-exporter-base, otlp-grpc-exporter-base and otlp-proto-exporter-base), which also serve as the base packages of the metrics/logs exporters.
I started there, but because of the way the headers are built currently, I need them added in the individual exporters. The headers for each exporter are intended to override the generic variables. For example, the OTEL_EXPORTER_OTLP_TRACES_HEADERS are used over OTEL_EXPORTER_OTLP_HEADERS for traces, so even if I set it in base it will get clobbered by the trace-specific setup. If I set them in the base packages, I'll have to update each of these constructors anyway. I'm open to trying that if others agree that is the best way to do this.
Let's see what we can do after #3748 being fixed which will change the way headers are merged.
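To make the header precedence being discussed concrete, here is a hedged illustrative sketch (the function and parameter names are assumptions, not the actual otlp-exporter-base API): signal-specific headers override generic ones, and any user-supplied header overrides the exporter's baked-in default User-Agent.

```python
def merge_otlp_headers(generic, signal_specific, default_user_agent):
    """Illustrative sketch (not the actual OpenTelemetry JS code) of the
    precedence described above: signal-specific headers (e.g. from
    OTEL_EXPORTER_OTLP_TRACES_HEADERS) override the generic
    OTEL_EXPORTER_OTLP_HEADERS, and any user-supplied header overrides
    the default User-Agent the exporter would otherwise bake in."""
    headers = {"User-Agent": default_user_agent}
    headers.update(generic)          # generic env headers override the default
    headers.update(signal_specific)  # trace/metric/log-specific headers win last
    return headers
```

Merging in this order is what avoids the clobbering problem described above: the default User-Agent survives unless something later in the chain explicitly sets one.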
This should no longer be forbidden but I wasn't sure what I was missing.
That depends on browser implementation. Chrome Browser currently doesn't support that. I think we can just ignore the warnings.
I was getting errors when running tests. It's possible the issue is that I tried overwriting user agent instead of appending 🤔 I'm definitely less familiar with the browser implementations.
Error: Uncaught AssertionError: 'Refused to set unsafe header "User-Agent"' === 'Request Timeout' (webpack-internal:///../../../node_modules/assert/assert.js:199)
The test runs a headless chrome browser and would reject if you try to set the user agent on a request. Maybe we should do a try-catch on the send method of otlp-exporter-base.
|
GITHUB_ARCHIVE
|
Error setting up entry PyLoxone for loxone
Hello,
I would like to connect my Loxone MiniServer Gen2 to HA, and now I have the following problem. I installed the HA OS and HACS; after this I installed the File Editor and the PyLoxone software. I added the login to configuration.yaml, and after a restart I get the error message below. I have already changed the IP of the HA, the HA was disconnected for the last 24h, and I tried different logins and ports. The MiniServer Gen2 has the FW version <IP_ADDRESS>; it's the latest version.
I added the following code to configuration.yaml:
loxone:
  port: 80
  host: http://dns.loxonecloud.com/504F********
  username: User
  password: *********
  generate_scenes: false # default is true
  generate_scenes_delay: 5
  generate_lightcontroller_subcontrols: true
In some issues something is written about a file named ...token, which should be saved in the config folder, but on my HA there is no file saved with this or a similar name.
I have also tried the last four versions of PyLoxone, including the newest beta version.
I can connect to the site with the JSON file without any problem, and the connection from outside is working as well.
thanks
BR
Mario
Logger: homeassistant.config_entries
Source: custom_components/loxone/api.py:60
Integration: PyLoxone (documentation, issues)
First occurred: 22:32:40 (1 occurrences)
Last logged: 22:32:40
Error setting up entry PyLoxone for loxone
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/h11/_state.py", line 249, in _fire_event_triggered_transitions
new_state = EVENT_TRIGGERED_TRANSITIONS[role][state][event_type]
KeyError: <class 'h11._events.ConnectionClosed'>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/config_entries.py", line 269, in async_setup
result = await component.async_setup_entry(hass, self) # type: ignore
File "/config/custom_components/loxone/init.py", line 132, in async_setup_entry
if not await miniserver.async_setup():
File "/config/custom_components/loxone/miniserver.py", line 101, in async_setup
request_code = await self.lox_config.getJson()
File "/config/custom_components/loxone/api.py", line 60, in getJson
api_resp = await requests.get(url_api,
File "/usr/local/lib/python3.8/site-packages/requests_async/api.py", line 11, in get
return await request("get", url, params=params, **kwargs)
File "/usr/local/lib/python3.8/site-packages/requests_async/api.py", line 6, in request
return await session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.8/site-packages/requests_async/sessions.py", line 79, in request
resp = await self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.8/site-packages/requests_async/sessions.py", line 136, in send
r = await adapter.send(request, **kwargs)
File "/usr/local/lib/python3.8/site-packages/requests_async/adapters.py", line 48, in send
response = await self.pool.request(
File "/usr/local/lib/python3.8/site-packages/http3/interfaces.py", line 49, in request
return await self.send(request, verify=verify, cert=cert, timeout=timeout)
File "/usr/local/lib/python3.8/site-packages/http3/dispatch/connection_pool.py", line 130, in send
raise exc
File "/usr/local/lib/python3.8/site-packages/http3/dispatch/connection_pool.py", line 120, in send
response = await connection.send(
File "/usr/local/lib/python3.8/site-packages/http3/dispatch/connection.py", line 59, in send
response = await self.h11_connection.send(request, timeout=timeout)
File "/usr/local/lib/python3.8/site-packages/http3/dispatch/http11.py", line 58, in send
http_version, status_code, headers = await self._receive_response(timeout)
File "/usr/local/lib/python3.8/site-packages/http3/dispatch/http11.py", line 130, in _receive_response
event = await self._receive_event(timeout)
File "/usr/local/lib/python3.8/site-packages/http3/dispatch/http11.py", line 161, in _receive_event
event = self.h11_state.next_event()
File "/usr/local/lib/python3.8/site-packages/h11/_connection.py", line 443, in next_event
exc._reraise_as_remote_protocol_error()
File "/usr/local/lib/python3.8/site-packages/h11/_util.py", line 76, in _reraise_as_remote_protocol_error
raise self
File "/usr/local/lib/python3.8/site-packages/h11/_connection.py", line 427, in next_event
self._process_event(self.their_role, event)
File "/usr/local/lib/python3.8/site-packages/h11/_connection.py", line 242, in _process_event
self._cstate.process_event(role, type(event), server_switch_event)
File "/usr/local/lib/python3.8/site-packages/h11/_state.py", line 238, in process_event
self._fire_event_triggered_transitions(role, event_type)
File "/usr/local/lib/python3.8/site-packages/h11/_state.py", line 251, in _fire_event_triggered_transitions
raise LocalProtocolError(
h11._util.RemoteProtocolError: can't handle event type ConnectionClosed when role=SERVER and state=SEND_RESPONSE
Can you try the newest version from HACS (0.3.5)? Please do not use the YAML anymore; configure it via the integrations menu. It seems that your PyLoxone does not reach the Loxone at all. In this case no token file can be created.
Hello, thanks for the answer.
I tried it that way and it works fine :)
|
GITHUB_ARCHIVE
|
Help identifying this frame?
I was told this was a Cinelli frame, although it's clearly not. The fork seems to be original, but I'm not sure what it could be just by looking at its dropouts and lugs. The seat post is 26.8, English threaded, and there is a "suntour pro" stamp on the dropouts.
I would seriously be glad if anyone could help!
frame, dropouts, windowed lugs and fork crown
serial number
oh! sorry for the weird rendering on the photos, my iphone has become senile.
Unfortunately, with a track bike there are fewer age-identifying features.
Actually, it doesn't look like a track frame to me. There are cable guides on the top tube, and the rear dropouts are horizontal, forward facing, with a derailer hanger.
Or did you mean the derailers, brakes, and other bits that would help to date the bike, which are missing when a bike is set up like a track bike?
haha, not a track bike at all; I just used those single-speed components because that's what I had on hand.
I only had access to the frameset, no idea on what other components came originally with this one.
Someone pointed towards a Japanese frame at bikeforums.net; does anyone have a clue?
OK, if we just have a random frame with no components, one can observe that it's lugged, so not likely to have been produced in the past 5-10 years when welding became the norm. And there are no lugs for downtube shifters, meaning it was either set up for stem shifters (prior to roughly 1985) or some sort of handlebar shifter (roughly after 1995). The water bottle bosses on the downtube are placed high, suggesting an older bike, before oversized water bottles became the vogue.
I can't clearly see all the cable lugs, but they appear to be the simple style designed for sheathed cables -- usually associated with cheaper bikes, and after maybe 1980. One odd thing is that clamp on the seat tube. It sort of looks like a downtube shifter mount, but normally there would be some sort of "stop" on the downtube (other than the bottle boss) to keep the clamp from slipping, and I cannot see such a thing (but the pictures are poor quality).
The clamp on the seat-tube looks like a pump peg to me!
It is a clamp for an upper-end Suntour derailleur.
I solved the puzzle. It's a Caloi Triathlon. Although Caloi is a Brazilian brand, they ordered 200 bikes from Suntour in 1985 for their racing teams. It came originally with a Suntour shifting group, Sugino cranks and Nitto handlebars. It's such a rare ride over here; it's a shame the paint is no longer original and I have no access to a full range of the components. I really like the looks and weight of the frame.
It was only made in this nice blue color.
A better look at the lugs on the top tube
Thanks for the chase, folks!
Another source who might be able to provide some background/back story is the guy who runs Yellow Jersey. Drop him a line, send some pictures, see what he thinks: http://www.yellowjersey.org
The photo quality could be better, but it looks to me like it is a mid-range Japanese-made frame from the mid- to late-1980s. The rear dropouts are not stamped, so it's not a low-end frame, and the single shifter boss on the downtube is for unitized Shimano or Suntour shifters, like you can see on the bikes photographed in this thread.
It's hard to tell for certain just from what I see in the photos, but the lugs appeared to be thinned toward the ends, which is again not something you see on low-end frames.
If you ask this question in the Classic & Vintage sub-forum from which I linked the above thread, you'll likely get a quick and accurate answer. I wish I could tell you more, but I don't recognize the semi-wrapped seat stay lugwork, though I'd wager someone at BikeForums will know much more.
Yeah, the folks in C&V are awesome.
Ah! What appeared to be a second water bottle boss was simply the shadow of the first boss on the wall.
Thanks for all the answers, people! I actually opened this thread in the meantime of opening it also at bike forums, please, take a look!
I think I used to own the same frame. It's (if it's similar to my frame) a Mitsubishi Shogun type frame, although most pictures do not show the same lugs and head tube. Please bear in mind that frames vary between years, not just models.
This is not my bike, but it's the same frame: https://rideblog.files.wordpress.com/2011/01/shogunrestore06.jpg?w=640&h=480
Hope this helps. Not saying it's this frame, but it looks very similar.
I can right away notice a bunch of differences -- more different than similar, I'd say.
Fair enough, it's simply my opinion :3 I have always liked riding lugged frames, which is why the Colnago road bikes seem so nice. I thought that this Shogun was quite similar, but perhaps not! heheheh Thanks for the comment!
Do you know the serial number pattern for this frame of yours? Mine is two letters (S and I) plus six digits.
I physically can't get to it, heheheh. It's covered by the cable guide on the bottom bracket shell, and is completely caked over with all sorts of gunk. I haven't used the bike in years; it's just sat at the back of my shed, doing pretty much nothing. I shall try to see if I can find the serial number later on when I get back, currently at work eheehheh. I think it's something similar to the pattern you've posted, but I am not promising anything! Hehehehe.
Thanks for answering, mate, but I can spot several differences right away. Take a look at the semi-wrapped seat cluster, for instance.
|
STACK_EXCHANGE
|
how to map rectangular coordinate system onto JavaFX GraphicsContext canvas
I am trying to figure out how to apply the proper sequence of translate and scale commands (or a single .transform command) so that the default pixel grid (0,0,1920,1080), for instance, is set up so that the coordinate system (minx,miny,maxx,maxy) can be used, where for instance
minx=100 maxx=200 miny=-5 maxy=5
I am trying to render mathematical functions, map the results to pixels, and use the mouse to select certain regions to zoom in on. My current code is at
https://bitbucket.org/stephenc214/fastmath/src/default/src/fastmath/fx/HardyZMap.java
I would really appreciate any help. I know this is simple, but I just can't seem to get it to work properly.
import javafx.scene.canvas.GraphicsContext;
import javafx.scene.transform.Affine;
import javafx.scene.transform.NonInvertibleTransformException;
double xrange = maxx - minx;
double yrange = maxy - miny;
// width and height are in units of pixels
// gc is the GraphicsContext object from javafx
Affine t = new Affine();
double xratio = xrange / width;
double yratio = yrange / height;
t.appendTranslation( minx, miny );
t.appendScale( xratio, yratio );
try
{
    t.invert();
}
catch( NonInvertibleTransformException e )
{
    throw new RuntimeException( e.getMessage(), e );
}
gc.setTransform( t );
You can map [x, y] pixel coordinates in an image of size [width, height] to your given range as follows:
x'=minx+(maxx-minx)*(x+0.5)/width;
y'=miny+(maxy-miny)*(y+0.5)/height;
I'm looking for the inverse of that, but can't do it that way; I need to do it as an affine transform. I'm setting up the coordinate system a priori with translate() and scale() commands applied to the GraphicsContext (http://docs.oracle.com/javafx/2/api/javafx/scene/canvas/GraphicsContext.html) associated with the Canvas object, which takes up the whole window and whose coordinate (0,0) is the upper-left of the window. Currently I'm using something like this:
gc.restore();
gc.translate( 0, height / 2 );
gc.scale( width / xrange, height / yrange );
gc.translate( -minx, 0 );
Usually you don't want the inverse; you want to map each pixel on the screen exactly once to your function. Here's a similar question to yours: http://computergraphics.stackexchange.com/questions/4193/what-is-the-correct-order-of-transformations-scale-rotate-and-translate-and-why/4194#4194
Would have voted for close, but this is a more generic answer.
Thank you. I'm actually doing the opposite of that: I'm looping through each pixel x,y and mapping it to precisely one point in the complex plane to evaluate something like the Mandelbrot set. Also, the mouse click handler returns the clicked point in pixels, and I need to map that onto the complex plane as well. I realize each "pixel" is actually a rectangle on the complex plane, but the resolution is small enough that just sampling the center of this rectangle is fine for display purposes.
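For reference, the forward map from the answer above and its algebraic inverse can be written out in plain arithmetic (a language-neutral sketch in Python, independent of the JavaFX Affine API; note it does not flip the y axis):

```python
def pixel_to_world(px, py, width, height, minx, maxx, miny, maxy):
    """Map the center of pixel (px, py) into the world rectangle
    [minx, maxx] x [miny, maxy] (the formula given above)."""
    wx = minx + (maxx - minx) * (px + 0.5) / width
    wy = miny + (maxy - miny) * (py + 0.5) / height
    return wx, wy

def world_to_pixel(wx, wy, width, height, minx, maxx, miny, maxy):
    """Inverse of pixel_to_world: the same equations solved for px and py
    (useful for mapping a mouse click back onto the complex plane)."""
    px = (wx - minx) / (maxx - minx) * width - 0.5
    py = (wy - miny) / (maxy - miny) * height - 0.5
    return px, py
```

world_to_pixel returns fractional pixel coordinates; round or floor them when indexing an actual pixel buffer.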
|
STACK_EXCHANGE
|
#ifndef C2M_STRUCTURE_ELEMENTS_HPP
#define C2M_STRUCTURE_ELEMENTS_HPP
#include <crash2mesh/core/types.hpp>
#include <crash2mesh/io/erfh5/file_contents.hpp>
#include <set>
#include <vector>
namespace c2m
{
/**
* @brief Represents any N-Dimensional atomic element (0D to 3D)
*
*/
class FiniteElement
{
public:
using Ptr = std::shared_ptr<FiniteElement>;
/**
* @brief Get the dimension of derived elements
*
* @return int dimension of derived elements
*/
virtual int dim() = 0;
const erfh5::FEType& type; ///< The type associated with any finite element
const partid_t partID; ///< The part any finite element belongs to (may be ID_NULL)
const entid_t entityID; ///< The unique ID identifying any finite element
// For comparison and ordering (compares ID)
bool operator==(const FiniteElement& other) const;
bool operator!=(const FiniteElement& other) const;
bool operator<(const FiniteElement& other) const;
bool operator>(const FiniteElement& other) const;
protected:
static entid_t maxID; ///< To track the entityID given to any newly created FiniteElement
/**
* @brief Construct a new Finite Element with given type and belonging to given part
*
* @param _type type
* @param _partID id of containing part
*/
FiniteElement(const erfh5::FEType& _type, partid_t _partID);
};
/**
* @brief Represents 0-dimensional points
*
*/
class Node : public FiniteElement
{
public:
using Ptr = std::shared_ptr<Node>;
const static int DIM = 0;
virtual int dim()
{
return DIM;
}
/**
* @brief Construct a new Node given a node identifier and positions
*
* @param _ID node identifier
* @param positions 3D positions
*/
Node(nodeid_t _ID, const MatX3& _positions);
const nodeid_t ID; ///< A node's ID (not the same as FiniteElement::entityID)
MatX3 positions; ///< A node's positions
uint referencingParts; ///< Number of parts referencing this vertex
};
/**
* @brief Represents any higher dimensional atomic elements created from connecting nodes
*
*/
class ConnectedElement : public FiniteElement
{
public:
using Ptr = std::shared_ptr<ConnectedElement>;
virtual int dim() = 0;
std::vector<Node::Ptr> nodes; ///< This element's connected nodes
std::vector<bool> active; ///< Whether the element is active or inactive
protected:
/**
* @brief Construct a new Connected Element given its type, its containing part identifier
* and its connected nodes
*
* @param _type type
* @param _partID part identifier
* @param _nodes connected nodes
*/
ConnectedElement(const erfh5::FEType& _type, partid_t _partID, const std::vector<Node::Ptr>& _nodes, const std::vector<bool>& _active);
};
/**
* @brief Represents 1D connected atomic elements
*
*/
class Element1D : public ConnectedElement
{
public:
using Ptr = std::shared_ptr<Element1D>;
const static int DIM = 1;
virtual int dim()
{
return DIM;
}
const static std::vector<const erfh5::FEType*> allTypes; ///< Contains all valid types for Element1D
const elemid_t elem1dID; ///< Identifier (unique among Element1Ds)
/**
* @brief Construct a new Element1D given its identifier, type, containing part and connected nodes
*
* @param _ID element1D identifier
* @param _type type
* @param _partID containing part identifier
* @param _nodes connected nodes
*/
Element1D(elemid_t _ID, const erfh5::FEType& _type, partid_t _partID, const std::vector<Node::Ptr>& _nodes, const std::vector<bool>& _active);
};
/**
* @brief Represents 2D connected atomic elements
*
*/
class Element2D : public ConnectedElement
{
public:
using Ptr = std::shared_ptr<Element2D>;
const static int DIM = 2;
virtual int dim()
{
return DIM;
}
const static std::vector<const erfh5::FEType*> allTypes; ///< Contains all valid types for Element2D
const elemid_t elem2dID; ///< Identifier (unique among Element2Ds)
const VecX plasticStrains; ///< Plastic strain timeseries
const float plasticStrain0; ///< Initial Plastic strain
/**
* @brief Construct a new Element2D given its identifier, type, containing part, connected nodes
* and plastic strain time series.
*
* @param _ID element2D identifier
* @param _type type
* @param _partID containing part identifier
* @param _nodes connected nodes
* @param _plasticStrain0 initial plastic strain
* @param _plasticStrains plastic strain time series
*/
Element2D(elemid_t _ID,
const erfh5::FEType& _type,
partid_t _partID,
const std::vector<Node::Ptr>& _nodes,
const std::vector<bool>& _active,
float _plasticStrain0,
const VecX& _plasticStrains);
};
/**
* @brief Represents 3D connected atomic elements
*
*/
class Element3D : public ConnectedElement
{
public:
using Ptr = std::shared_ptr<Element3D>;
const static int DIM = 3;
virtual int dim()
{
return DIM;
}
const static std::vector<const erfh5::FEType*> allTypes; ///< Contains all valid types for Element3D
const elemid_t elem3dID; ///< Identifier (unique among Element3Ds)
const VecX ePlasticStrains; ///< equivalent plastic strain timeseries
const float ePlasticStrain0; ///< Initial Plastic strain
/**
* @brief Construct a new Element3D given its identifier, type, containing part, connected nodes
* and equivalent plastic strain time series.
*
* @param _ID element3D identifier
* @param _type type
* @param _partID containing part identifier
* @param _nodes connected nodes
* @param _ePlasticStrain0 initial plastic strain
* @param _ePlasticStrains equivalent plastic strain time series
*/
Element3D(elemid_t _ID,
const erfh5::FEType& _type,
partid_t _partID,
const std::vector<Node::Ptr>& _nodes,
const std::vector<bool>& _active,
float _ePlasticStrain0,
const VecX& _ePlasticStrains);
};
/**
* @brief Represents atomic 2D surface elements extracted from 3D volume elements
*
*/
class SurfaceElement : public Element2D
{
public:
using Ptr = std::shared_ptr<SurfaceElement>;
const static int DIM = 2;
virtual int dim()
{
return DIM;
}
const static std::vector<const erfh5::FEType*> allTypes; ///< Contains all valid types for SurfaceElements
const elemid_t surfaceElemID; ///< Identifier (unique among SurfaceElements)
const Element3D::Ptr volume; ///< The volume belonging to this surface element
/**
* @brief Construct a new SurfaceElement given its identifier, type, containing part, connected nodes
* and the volume element it belongs to.
*
* @param _ID SurfaceElement identifier
* @param _type type
* @param _partID containing part identifier
* @param _nodes connected nodes
* @param _volume volume element this belongs to
*/
SurfaceElement(elemid_t _ID,
const erfh5::FEType& _type,
partid_t _partID,
const std::vector<Node::Ptr>& _nodes,
Element3D::Ptr _volume);
};
} // namespace c2m
#endif
|
STACK_EDU
|
How would I use sed and/or awk/grep to edit the line following a matched string?
I have a large text file that needs some changes. I need to do this by first locating lines that have a particular common string, and then editing the line directly after that. So for example, if I ran this grep command:
# grep -A1 important_string gianttextfile.txt
important_string
change_this
I would want to first locate important_string and then modify change_this to be something else, several times throughout the document. I cannot just modify all change_this entries, because many of them need to stay as they are; it's just the ones following this particular string that I need to change.
What would be the best way to accomplish this?
Should a line just after important_string be tested for important_string: before substitution? after substitution? only if there was no substitution? or never?
@KamilMaciorowski I'm maybe not understanding what you're asking. There's two possible values that can follow important_string. change_this is one, new_value is the other (there's some other junk on that line too but that can stay as it is). What I need is effectively # sed -i 's/change_this/new_value/' file.txt but applied only to the line immediately following each occurrence of important_string.
These are example strings, not your actual strings, right? If the actual strings were foo, ofa and f respectively, and the input was foo, ofafoo, foofa, ofaoo, ofa (5 lines), then ofafoo should be changed to ffoo. But there's also foo there, before and after the replacement. So should the third line be changed? If yes, then foo in the third line will disappear. Should the fourth line be changed because of the disappearing foo in the third? If yes, then foo will appear in the fourth. Should the fifth line be changed because of the appearing foo?
sed '/important_string/ {n;s/change_this/new_value/}'
Notes:
Remember important_string and change_this are parsed as regular expressions.
Any line where s is performed is not tested for important_string, so it cannot trigger s for the next line. This means a snippet like this:
…
foo # this line does not trigger s for the next line
important_string # this line triggers s for the next line
important_string # s is performed here
change_this # s is not performed here
…
will not change. A variant that always tests for important_string is
sed ':start; /important_string/ {n;s/change_this/new_value/;b start}'
If s could make important_string appear or disappear then you may want to test for important_string before substitution and/or after substitution; or not to test if s made a successful substitution. This answer does not cover all these cases.
Use s/…/…/g if needed.
Use sed -i … file if needed.
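For comparison, the line-after-a-match logic of the first sed command can be sketched in Python (a rough equivalent using literal substring matching rather than regexes; the function name replace_after_match is illustrative, not from the thread):

```python
def replace_after_match(lines, marker, old, new):
    """Replace old with new, but only on a line that directly follows
    a line containing marker (literal substrings, not regexes)."""
    out = []
    prev_matched = False
    for line in lines:
        if prev_matched:
            line = line.replace(old, new, 1)
        # mirror sed's n behavior: a line where the substitution was
        # attempted is not itself tested for the marker
        prev_matched = (marker in line) and not prev_matched
        out.append(line)
    return out
```

Like the sed snippet, this skips testing a just-substituted line for the marker, so two consecutive important_string lines behave the same way as in the note above.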
|
STACK_EXCHANGE
|
Can we add git-annex to GitLab CE?
I know GitLab EE supports git-annex. However, since we are a small team (of 2) with lower-cost projects, we're using GitLab CE in a self-hosted environment.
Now facing problems with big binary files and images, we're starting to look for a solution.
If there is any way git-annex can be manually integrated to gitlab-shell, or anyone attempting this or wanting to attempt, kindly help me out.
It's been 7 years since this question was originally asked, and it is still relevant. From what I've found, it looks like git-annex has been entirely removed from both GitLab CE and EE in favor of git-lfs. To me, using git-lfs is not an option, because it forces you to push the LFS data to every mirror, so it takes up space on every mirror. As an example, if I wanted to have a mirror on GitHub while using LFS, I'd need to push the LFS data to GitHub too, which would force me to comply with GitHub's LFS limits. In addition, git-lfs forces the user to download all the data on pull, and in my case we're talking about lots of data, so downloading all of it on every pull is also not an option. So for my use case, git-lfs is not a proper replacement for git-annex.
There is a pull request for adding git-annex to Gitea but I wasn't able to make it work.
I have only been able to make two self-hosted solutions work with git-annex so far: (1) gitolite, which is not particularly easy to set up, and I eventually encountered some git-annex repo corruption problems that I haven't been able to fix, and (2) G-Node Gin, which I was able to set up using docker-compose, although I had to deviate slightly from (and simplify) the instructions provided on the website. Finally, you can always have a bare git repo over ssh with annex initialized on it.
Mind sharing a link to your work (modified docker-compose and instructions)? I'm interested in having something like this for my company (where most aren't used to handling multiple git remotes).
The integration is not the problem; storage is. Why would Gitlab store your data for free?
I suggest you guys enable ssh access on your PCs, set each other as remotes, then just git annex copy --from remote-name. It's easier, doesn't rely on third-parties, and it's the way Linus and Joey intended git and git-annex to be used.
The OP clearly asked for help integrating git-annex into their self-hosted environment, not into the GitLab servers. Therefore, based on the original question, GitLab wouldn't be "storing his data for free", as he would be storing the data on his self-hosted runner.
Thanks for the clarification, Reniel. I had not understood this at the time.
|
STACK_EXCHANGE
|
On Wed, May 10, 2006 at 06:03:25AM -0500, Ben Collins-Sussman wrote:
> On 5/10/06, Malcolm Rowe <email@example.com> wrote:
> >Unless there are two radically different use cases that I've been missing,
> Yes, the two radically different use-cases for dealing with a file
> that is schedule-add-with-history are
> * "Show me exactly what's going to be sent into the repository."
[which we do for repos->wc diffs]
> * "Show me my local edits."
[which we do for wc->wc diffs]
> They are mutually exclusive behaviors, but both are legitimate things
> you may want to do with a schedule-add-with-history file. One
> behavior should be the default, and the other should be toggled by a
> switch. At the moment, the 2nd behavior is all we do.
Unless I'm misunderstanding what you're suggesting with the first item,
we do both. Diffing a 'renamed' file (actually one deleted and one
schedule-add file) will show a complete deletion of OLD plus:
repos->repos diff, repos->BASE diff, 'svnlook diff': a complete add of NEW
wc->wc diff, 'svnlook diff --diff-copy-from': a diff of NEW against OLD
My argument is that 'svn diff' should be doing one or the other by
default, not changing whether we take account of copyfrom information
based on whether we're doing a wc->wc diff or a repos->wc diff.
> You make a good argument that the 1st behavior should be the default,
> for the sake of consistency with other diff commands. If it's
> changed, please at least make the 2nd behavior still triggerable
In an ideal world, I think that, for consistency with GNU diff and
svnlook, and for minimal surprise, 'svn diff' should not take account
of copyfrom information unless a '--diff-copy-from' flag is passed.
We could then extend repos->repos and repos->BASE diffs to accept this
flag as well.
I don't expect this to be entirely uncontroversial, but our current diff
implementation is far from self-consistent - so we're going to have to
break something however we try to fix it (that, or codify our existing
implementation as our ideal design).
Also note that I'm not suggesting any particular behaviour for 'svn diff'
in the face of true renames - that's not something I've spent much time
thinking about yet.
To unsubscribe, e-mail: firstname.lastname@example.org
For additional commands, e-mail: email@example.com
Received on Fri May 12 09:28:46 2006
|
OPCFW_CODE
|
My name is Brent Schooley and I am the Developer Interaction Designer for the NetAdvantage Reporting product. My job is to make sure your experience with the product is positive and productive. It's very important to us that we meet your needs with NetAdvantage Reporting so your feedback is very valuable!
Is there something we're missing? Something that could be made easier/more valuable? Anything that bugs you that you'd like to see done differently? Let me know so that I can help get the changes made so that the product is what you need.
If you have thoughts, there are a few ways you can let me know:
Let's work together to make this a great product!
Just wanted to comment on a few things that would be nice. And let me preface my suggestions/comments by saying that it's *very* possible that there are ways to do some of the things that I mention and that I just haven't figured out how yet. :)
1. I've got reports where multiple parameters are being used. In order for me to get them to show up in a specific order in the report viewer, I have to preface the Name of the parameter with specific characters. For example, I have a parameter that is the end date for a query. In order to have it show up last in the list of parameters that the user can fill in, I have to name it "yEndDate". So I've got "xStartDate" and "yEndDate" in order to keep the Start Date showing up above the End Date in the report viewer.
2. It would be nice to have a DatePicker control in the report viewer for a DateTime parameter as opposed to straight text.
3. I love that the reports are in XAML... but it would be nice to be able to right-click on the igr file and select "View Code" (much like other items in VS2010). Otherwise, how do I get to the XAML?
4. It would be nice to be able to insert a PageBreak after certain controls in a report (<PageBreak/> element in XAML).
5. It would be nice to be able to hide/enable/disable parameters in the report viewer based upon the value of another parameter. For example, I have 3 parameters in a specific report. First is a boolean "View Surveys with any Date". Then the next two are "Start Date" and "End Date". If the user selects "View Surveys with any Date", then I'd like to be able to disable/hide "Start Date" and "End Date".
I think that's it for now. If anyone has ways to do these things, I'd love to know about it! :)
I sent you an email with all the relevant information that I have on this software, but lately a coworker and I have started to really discover some memory leak issues when using the report designer in visual studio. Our machine will run out of memory and visual studio will eventually crash.
We did fix some memory leaks in the designer after CTP2.
I am developing a part of a software package which is supposed to show, print, create, alter and export reports of the data collected inside the main solution.
The most amazing solution would be a control for editing reports (igr files). We have a relatively static set of data to export, so I could provide the end user with a fixed set of object data sources and, with your help, the possibility to edit his report templates: put in his own corporate logo, color his own background, pick the fields he would like to display from the definition of the data source (and not those we would have to define hardcoded), etc. Afterwards the report should be saved and is then usable for anyone to display.
I would also like to load report templates (igr files) from anywhere: resources, relative URI, absolute URI, string, and stream.
I am sure I will come up with more ideas. Those are the ones I would be eager to see implemented.
Thanks in advance, Enrico
|
OPCFW_CODE
|
// Implementation of View interface
// It uses termbox-go, see https://github.com/nsf/termbox-go
package view
import (
"fmt"
"fracture/algorithm"
"fracture/data"
"github.com/nsf/termbox-go"
"time"
)
const hsize = 31
const wsize = 81
const lsize = 30
type datas struct {
ox, oy, precision float64
width, height int
}
type TermboxScreen struct {
currentFractal datas
logs []string
stop chan int
nbGen int
coloration Coloration
}
func NewTermboxScreen(width, height int, coloration Coloration) *TermboxScreen {
return &TermboxScreen{datas{0.0, 0.0, 4.0 / float64(wsize), width, height}, make([]string, 0), make(chan int, 1), 0, coloration}
}
func (t *TermboxScreen) Init() {
if err := termbox.Init(); err != nil {
panic(err)
}
t.Log("init interface: done")
}
func (t *TermboxScreen) Close() {
termbox.Close()
}
func (t *TermboxScreen) Log(str string) {
for j := 0; j < hsize; j++ {
for i := 0; i < wsize; i++ {
termbox.SetCell(wsize+1+i, j, ' ', termbox.ColorWhite, termbox.ColorDefault)
}
}
str = fmt.Sprintf("* %s", str)
t.logs = append([]string{str}, t.logs...)
current := hsize - 1
for _, log := range t.logs {
current -= (len(log) / lsize)
for i, c := range log {
line := current + i/lsize
if line >= 0 {
termbox.SetCell(wsize+1+i%lsize, line, c, termbox.ColorWhite, termbox.ColorDefault)
}
}
current--
}
}
func (t *TermboxScreen) EventLoop(routine algorithm.Routine) {
routine(t, wsize, hsize, t.currentFractal.ox, t.currentFractal.oy, t.currentFractal.precision)
loop:
for {
switch ev := termbox.PollEvent(); ev.Type {
case termbox.EventKey:
compute := false
if ev.Key == termbox.KeyCtrlC {
break loop
} else if ev.Key == termbox.KeyArrowDown {
t.currentFractal.oy += t.currentFractal.precision
compute = true
} else if ev.Key == termbox.KeyArrowUp {
t.currentFractal.oy -= t.currentFractal.precision
compute = true
} else if ev.Key == termbox.KeyArrowRight {
t.currentFractal.ox += t.currentFractal.precision
compute = true
} else if ev.Key == termbox.KeyArrowLeft {
t.currentFractal.ox -= t.currentFractal.precision
compute = true
} else if ev.Key == termbox.KeyCtrlR {
t.currentFractal.precision *= 0.9
compute = true
} else if ev.Key == termbox.KeyCtrlT {
t.currentFractal.precision *= 1.1
compute = true
} else if ev.Key == termbox.KeyCtrlS {
t.nbGen += 1
is := NewImageSaver(t.currentFractal.width, t.currentFractal.height, fmt.Sprintf("test%d.png", t.nbGen))
is.Init()
precision := float64(wsize) * t.currentFractal.precision / float64(t.currentFractal.width)
t.Log(fmt.Sprintf("precision %f", precision))
routine(is, t.currentFractal.width, t.currentFractal.height, t.currentFractal.ox, t.currentFractal.oy, precision)
t.Log("test")
}
if compute {
routine(t, wsize, hsize, t.currentFractal.ox, t.currentFractal.oy, t.currentFractal.precision)
}
}
}
t.stop <- 0
}
func (t *TermboxScreen) Listen(channel chan data.Pair, matrix *data.Matrix) {
point, ok := <-channel
for ok {
val := matrix.At(point.First, point.Second)
c, _ := t.coloration(val).(rune)
termbox.SetCell(point.First, point.Second, c, termbox.ColorWhite, termbox.ColorDefault)
point, ok = <-channel
}
}
func (t *TermboxScreen) FlushLoop() {
loop:
for {
select {
case <-t.stop:
break loop
default:
time.Sleep(time.Millisecond)
t.drawInterface()
termbox.Flush()
}
}
}
func (t *TermboxScreen) drawInterface() {
for j := 0; j < hsize; j++ {
termbox.SetCell(wsize, j, ' ', termbox.ColorWhite, termbox.ColorWhite)
}
for i := 0; i < wsize+lsize; i++ {
termbox.SetCell(i, hsize, ' ', termbox.ColorWhite, termbox.ColorWhite)
}
test := fmt.Sprintf("x=%f y=%f precision=%f", t.currentFractal.ox, t.currentFractal.oy, t.currentFractal.precision)
for i, c := range test {
termbox.SetCell(i, hsize, c, termbox.ColorBlack, termbox.ColorWhite)
}
termbox.SetCell(wsize/2, hsize/2, ' ', termbox.ColorBlack, termbox.ColorWhite)
}
|
STACK_EDU
|
package com.revolut.butter;
import java.util.AbstractMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collector;
import static java.util.stream.Collectors.toMap;
public interface MapStreamUtil {
static <K, V> Collector<Entry<K, V>, ?, Map<K, V>> entriesToMap() {
return toMap(Entry::getKey, Entry::getValue);
}
static <K, V1, V2> Collector<Entry<K, V1>, ?, Map<K, V2>> entriesToMap(Function<V1, V2> valueMapper) {
return toMap(Entry::getKey, entry -> valueMapper.apply(entry.getValue()));
}
static <K, V> Predicate<? super Entry<K, V>> ifEntryKey(Predicate<? super K> keyPredicate) {
return entry -> keyPredicate.test(entry.getKey());
}
static <K, V> Predicate<? super Entry<K, V>> ifEntryValue(Predicate<? super V> valuePredicate) {
return entry -> valuePredicate.test(entry.getValue());
}
static <K1, K2, V1, V2> Function<Entry<K1, V1>, Entry<K2, V2>> transformEntry(Function<K1, K2> keyMapper,
Function<V1, V2> valueMapper) {
return entry -> new AbstractMap.SimpleImmutableEntry<>(
keyMapper.apply(entry.getKey()),
valueMapper.apply(entry.getValue()));
}
}
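For comparison only (not part of this utility), the same entry transformations collapse to small helpers in a language with first-class dicts; a Python sketch:

```python
def entries_to_map(entries, value_mapper=lambda v: v):
    """Analog of entriesToMap: rebuild a dict from (key, value)
    pairs, optionally mapping each value."""
    return {k: value_mapper(v) for k, v in entries}

def transform_entry(key_mapper, value_mapper):
    """Analog of transformEntry: return a function that maps one
    (key, value) pair to a new pair."""
    return lambda kv: (key_mapper(kv[0]), value_mapper(kv[1]))
```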
|
STACK_EDU
|
how can you replace characters in a string under the condition re.IGNORECASE
sentence = 'this is a book.pdf'
sentence.replace( 'pdf' or 'PDF' ,'csv' )
sentence.replace('pdf','csv',re.IGNORECASE)
How can I replace the characters under the condition specified, such as Pdf or PDF, or ignoring case altogether?
How about using lower() for the sentence, i.e. sentence.lower().replace(...)?
You should explicitly state that case sensitivity is not actually important, and that your actual goal is replacing any file extension with .csv, which is another question with a very large number of duplicates. This is a case of the XY Problem along with an ambiguously stated goal that does not and will not match its answers or keywords. This would no doubt pollute search results.
Looks as if you want to truncate any file extension found and add .csv. I would recommend using \w{1,5} (one to five word chars) instead of \w+ (one or more), because of the case of files named an12n512n5125.1125n125n125 which I've had in my own file blobs often.
Match period followed by one or more alphanumeric characters at the end of string ($) and replace with .csv. Case sensitivity no longer matters:
import re
sentence = 'this is a book.pdf'
ext2 = 'csv'
sentence = re.sub(r'\.\w+$', f'.{ext2}', sentence)
Slice the end of the string, compare it lowercased to .pdf, and replace .pdf with .csv, using string interpolation (f"") for customizable extensions:
sentence = 'this is a book.pdf'
ext1 = 'pdf'
ext2 = 'csv'
sentence = sentence[:-4]+f'.{ext2}' if sentence[-4:].lower()==f'.{ext1}' else sentence
Using regex with $ to match end of string with re.IGNORECASE. Using string interpolation for customizable extensions
import re
sentence = 'this is a book.pdf'
ext1 = 'pdf'
ext2 = 'csv'
sentence = re.sub(rf'\.{ext1}$', f'.{ext2}', sentence, flags=re.IGNORECASE)
The initial solution is not good practice as it assumes you know where .pdf is located, but the latter is more plausible
I don't understand that assertion; is the use case not for file extensions? File extensions must be at the end of the string, so you would want to explicitly allow the match only at the end of the string, which is the reasoning for using [:-4]. Unless you want to allow .pdf anywhere in the string, which in my own usage is actually a source of false positives, such as (a real example) PDFEscape.exe => csvEscape.exe, which would cause issues. If a case like file.pdf.zip also needs handling, a slightly more complicated regex would be needed to account for it.
If this is a simple batch script where all your file names are known to be within the domain that .lower works, there's no reason not to just use .lower. It really has nothing to do with "best practice" rather than, if you're just creating a .bat/.sh script or something similar for a restricted use case, cutting this corner doesn't matter.
Oh yes, I get what you mean now.
If you are doing this for multiple kinds of files then you can find the index of the period(.), delete everything after it and add the file extension to the end
sentence = sentence[:sentence.index(".")+1]
sentence += "csv"
what if you have multiple dots in between is there a way of specifying the last dot
Use .rindex to get the last index of the dot.
I’m going to assume you are doing this to a string
sentence = sentence.lower()
Better yet, just sentence.lower() at the point where you next use sentence could do the trick; hard to say without more context.
Although not best practice, it definitely shortens the code and works well.
Agreed; with more context about what it is used for, a better example could be given.
@DanielButler I think it's fair to say that lowercasing the entire filename is, in the majority of use cases, not the intended result and could cause mangling of filenames. The only valid cases are if you actually need it in lowercase or you are using it only as an intermediate checking string (whereas the OP is assigning it to a variable "sentence", which implies it will be reused in a context where case sensitivity is probably explicitly needed).
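Pulling the answers above together: a small helper built on the standard os.path.splitext keeps the replacement case-insensitive and anchored to the real extension (the name replace_ext is illustrative, not from the thread):

```python
import os

def replace_ext(filename, old_ext, new_ext):
    """Swap old_ext for new_ext, ignoring case, but only when it is
    the actual (final) extension of the filename."""
    root, ext = os.path.splitext(filename)
    if ext.lower() == '.' + old_ext.lower():
        return root + '.' + new_ext
    return filename
```

Because splitext only looks at the final extension, PDFEscape.exe and file.pdf.zip are left untouched, avoiding the false positives discussed above.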
|
STACK_EXCHANGE
|
Initial draft of the ScopeManager proposal.
Scope Manager
Current State: Draft
Author: carlosalberto
In the OpenTracing specification, under the "Optional API Elements" section, it is mentioned languages may choose to provide utilities to pass an active Span around a single process.
Upon many iterations and feedback from several contributors, the Java 0.31 API defined the new Scope Manager concept, which is a simple and explicit way to manage the active Span at a given call-context point. This document intends to standardize this concept, so other supported languages and platforms can leverage it, with their respective semantical differences.
Technical background
For any thread, at most one Span may be "active". Of course, there may be many other Spans involved with the thread which are (a) started, (b) not finished, and yet (c) not "active": perhaps they are waiting for I/O, blocked on a child Span, or otherwise off of the critical path.
For platforms where the call-context is propagated down the execution chain (such as Go), such context can be used to store the active Span at all times.
For platforms not propagating the call-context, it's inconvenient to pass the active Span from function to function manually, so OpenTracing should require, for those platforms, that Tracer contains a Scope Manager that grants access to the active Span through a container, called Scope (using some call-context storage, such as thread-local or coroutine-local).
Specification Changes
New ScopeManager and Scope interfaces are added to the specification, and the Tracer interface is extended to support creation of Spans that are automatically set as the active one for the current context.
ScopeManager
The ScopeManager interface allows setting the active Span in a call-context storage section, and has the following members:
activate, capability to set the specified Span as the active one for the current call-context, returning a Scope containing it. A required boolean parameter finish span on close will mark whether the returned Scope should, upon deactivation, finish the contained Span.
active, the Scope containing the current active Span if any, or else null/nothing.
Scope
The Scope interface acts as a container of the active Span for the current-call context, and has the following members:
span, the contained active Span for this call-context. It will never be null/nothing.
close, marking the end of the active period for the current Span, and optionally finishing it. Calling it more than once leads to undefined behavior.
If the language supports some kind of auto finishing statement (such as try for Java, or with for Python), Scope should adhere to such convention. Additionally, Scope is not guaranteed to be thread-safe.
Tracer changes
The Tracer interface will be extended with:
scope manager, the ScopeManager tracking the active Span for this instance.
start active span, a new behavior for starting a new Span, which will be automatically marked as active for the current call-context. It will return a Scope. A parameter finish span on close will mark whether the Scope should, upon deactivation, finish the contained Span. A default value for this parameter may be provided, depending on the suitability for the language and its use cases.
Both start span and start active span will implicitly use any active Span as the parent for newly created Spans (with a ChildOf relationship), unless the parent is explicitly specified, or (the new) ignore active span parameter is specified (in which case the resulting Span will have no parent at all).
Use Cases
Single-threaded operations.
The active Span will be accessible at all times through the Tracer, without any need to pass the Span around when creating children.
Multi-threaded operations.
The active Span will be accessible at all times, probably using thread-local storage, and it will be possible to pass Span instances between threads and manually manage its active period, and have full, manual control on when it should be finished.
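A minimal, illustrative Python sketch of these interfaces (the names follow the spec loosely; the thread-local storage choice and the Span stub are assumptions for the example, not part of the RFC):

```python
import threading

class Span:
    """Minimal stand-in for a tracing Span."""
    def __init__(self, name):
        self.name = name
        self.finished = False
    def finish(self):
        self.finished = True

class Scope:
    """Container of the active Span for the current call-context."""
    def __init__(self, manager, span, finish_on_close):
        self.span = span
        self._manager = manager
        self._finish_on_close = finish_on_close
        self._to_restore = manager.active  # previous scope, if any
    def close(self):
        if self._finish_on_close:
            self.span.finish()
        self._manager._set_active(self._to_restore)
    # support Python's auto-closing `with` statement, as the spec suggests
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        self.close()

class ScopeManager:
    """Tracks the active Scope in thread-local storage."""
    def __init__(self):
        self._tls = threading.local()
    @property
    def active(self):
        return getattr(self._tls, 'scope', None)
    def _set_active(self, scope):
        self._tls.scope = scope
    def activate(self, span, finish_span_on_close):
        scope = Scope(self, span, finish_span_on_close)
        self._set_active(scope)
        return scope
```

Closing a Scope restores whatever scope was active before activation, which is what lets nested activations unwind correctly in the single-threaded case.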
Risk Assessment
The following risks have been identified:
This change will mean API breakage, as both start span and start active span behaviors of Tracer will implicitly use any active Span as the parent for newly created Spans (previously, if no parent was specified, all created Spans were parentless).
For platforms having a prior version with a different in-context propagation (as it happened to the 0.30 Java version, which included a concept called ActiveSpanSource, which used reference-count to handle the lifetime of Spans), either a full migration will be needed, or at least a shim/bridge layer should be provided to ease with the proposed changes.
There can be languages and platforms for which, given their specific threading or memory models, implementing these changes will be either insufficient or redundant.
CC @palazzem @cwe1ss as contributors who have been working on ScopeManager implementations for other languages ;)
Updated the RFC (as per @cwe1ss's feedback) to mention finish span on close for both start active span and ScopeManager's activate.
@jpkrohling Hey, updated the draft to reflect the Go case - decided to leave (for now) the thread and the reactive parts out as they could be more of implementation detail (i.e. Java works 'just fine' using the ScopeManager concept, but some of their frameworks may find this insufficient, etc).
Hope that makes things clear for the remainder of the process, anyway ;)
I'm not sure we can say that Go has "call-context propagated down the execution chain". What I meant is that it's common in Go to pass down a context object[1], and such context object would contain a span context[2]. If that's indeed what you mean, I'm +1 to the changes.
1 - https://golang.org/pkg/context/
2 - https://github.com/opentracing/opentracing-go#creating-a-span-given-an-existing-go-contextcontext
Hey @jpkrohling thanks again for the feedback ;)
So yes, I had briefly consulted the documentation for Go and how they use Context to store the active Span - so my question is more about how you think I should write this part, i.e. not call-context propagated down the execution chain (to me, as a non-Go developer, it looks like the context is propagated down the chain, even if explicitly).
Let me know - the clearer we can define this part, the better ;)
I think something similar to my initial proposal would describe the scenario in a way that is independent of the language:
For applications where the whole unit of work happens in a single thread, it's more convenient to store the active span in a thread-local variable than to pass the span context to all involved functions
Otherwise, your version would also work. Perhaps it would make sense to just add the word "explicitly" here:
For platforms where the call-context is explicitly propagated down the execution chain...
Any implementation suggestions for C++?
|
GITHUB_ARCHIVE
|
It’s frustrating chatting with Andrew Phillips – for all the right reasons. Once you get your head around the work he is doing, you just want to know more. To me at least, it’s some of the most interesting work that is happening at Microsoft.
I met Andrew a few years back when I was still working at Microsoft in the UK and had the chance to visit our Microsoft Research lab in Cambridge, England, for the day. Andrew was one of a number of people who presented on that day and his work was breathtaking – I remember chatting with a group of European MPs who were also there and the resounding feeling was that more people should know that Microsoft does this kind of work. Well, it's taken 2 years but I've finally gotten around to it – and thanks to the team in Cambridge I'm helped by a video with Andrew doing a far better job of explaining his work than I can. I'll give it a shot though.
Andrew is the head of biological computation at the lab and received his Masters degree at nearby Cambridge University. His background is in theoretical computer science, with a focus on programming language development and his relationship with Microsoft began while studying for his PhD at London’s Imperial College. There he met Luca Cardelli, a Microsoft researcher, who was working on ambient calculus and using it to describe and theorize about concurrent computer systems, such as the Internet, and also biological systems, such as cells and viruses. As a visiting professor at Imperial, he and Andrew discussed the possibility of an internship at the Cambridge lab. Andrew’s internship application was successful and he began working with Luca on simulation algorithms for stochastic Pi calculus, a programming language for concurrent systems. His internship went so well that he stayed on as a post-doctoral researcher with a focus on developing stochastic Pi for biological modeling. At this point you’re probably thinking the same as me – why is biological modeling of interest to Microsoft?
It turns out that there are lots of similarities between modeling concurrent systems and biological systems. Just like a computer, biological systems perform information processing, which determines how they grow, reproduce and survive in a hostile environment. Understanding this biological information processing is key to our understanding of life itself. It’s probably easier to understand some of the output of this work – specifically the Stochastic Pi Machine, or SPiM as it’s often referred to. SPiM is a programming language for designing and simulating models of biological processes. The language features a simple, graphical notation for modeling a range of biological systems – meaning a biologist does not have to write code to create a model, they just draw pictures. You can think of SPiM as a visual programming language for biology. In addition, SPiM can be used to model large systems incrementally, by directly composing simpler models of subsystems. Historically, the field of biology has struggled with systems so complex they become unwieldy to analyze. The modular approach that is often used in computer programming is directly applicable to this challenge.
So where is all this taking us, I asked Andrew, and why is Microsoft involved in this field?
“Understanding biological systems is too complex a challenge to leave to trial and error,” Andrew said.
In doing so he acknowledged that much biological research to date has relied on laboratory-based testing. Biological programming languages provide scientists with the means to model biological systems, such as parts of our immune system, and then understand how they react to new types of viruses or new forms of treatment – entirely on a computer. Andrew is in fact developing a whole suite of biological modeling languages, not only for modeling complex systems such as the Immune system, but also for programming molecular computers made of DNA, and programming groups of cells to communicate with each other to perform complex functions. “The potential is tremendous,” Andrew said, “and software holds the key.” And that last statement is the key to much of the work of Microsoft Research. Software has the potential to help us understand and address some of the biggest challenges in society and understanding biological systems holds the key to some real breakthroughs. The real impact of this work hits home in the video when Jim Haseloff from Cambridge University says:
“all of the technologies we need to feed ourselves, to clothe ourselves, to provide materials for the modern world derive from nonrenewable sources and we need to move towards renewable sources and use sustainable technologies…largely they’re biologically based, so the ability to program biological systems is hugely valuable in that endeavor.”
As Andrew said, the potential is enormous. He explains that this work has wide-ranging potential not only in helping to understand disease, but in developing our ability to engineer more efficient ways of harnessing the sun's energy for food production and in our ability to transform carbon dioxide and other carbon sources into biofuels or electricity. He quoted the American physicist Richard Feynman in saying:
“what I cannot create, I do not understand.”
A Pharma 2020 Executive Summary published a couple of years ago by PwC states: "we anticipate that, by 2020, virtual cells, organs and animals will be widely employed in pharmaceutical research." Andrew explains that the ability to understand how cells work lies at the heart of our ability to understand disease. If we can understand and then reprogram how the cell works, we could in principle reprogram the cells of our immune system to fight disease better.
I find this stuff hard to get my head around at times, but in talking with Andrew a few things became clearer: Microsoft Research really is home to some of the smartest people on the planet, and they’re tackling some of the biggest challenges faced by society. And software, allied to these amazing minds, really does hold the key to enormous breakthroughs. It feels good to be a part of that world. As I finished up chatting with Andrew, he left me with one final question that opened my eyes to an entirely new world…
“Could the software industry of programming cells one day rival that of programming silicon?”
I’ll come back to that one when my brain stops hurting!
sidenote: Project Tuva from Microsoft Research is a fascinating look at the world of science through the lens of Richard Feynman and his lectures.
|
OPCFW_CODE
|
I installed a new system from the current image and restored the backup of the old system.
After the restart, the system is not working as expected.
The root console works fine.
Do I have to do any additional configuration?
It is impossible to tell what went wrong based on your post. Please be more specific.
I checked the system and I do get access to the web page.
The DHCP service is not running!
I installed the current "IPFire 2.25 (x86_64) - Core Update 146", and the
backup was from a 32-bit machine (Core Update 145).
The DHCP logs did not show any entries in the admin web page.
After I entered
the service started and was shown green in the status line.
After the next reboot of the system, the service did not start again.
I ran the command from https://wiki.ipfire.org/installation/arch-change
to get the graphs correct.
I will continue testing tomorrow.
Have you checked that GREEN is set to DHCP? This setting is in both the GUI and /var/ipfire/dhcp. The latter should contain an empty file “enable_green”, and the parameters in the file “settings” need to be compatible with those in the GUI.
If that does not correct the problem you might do better to:
- get core 145 x86_64 from archives and install
- restore backup from core 145 i586
- run arch-change commands
- then upgrade to core 146
The complete installation (including fhem) is now finished, and my system is now running the latest version on 64-bit.
The problem with the DHCP service not running was easy to solve once you know what to do:
just deactivate DHCP for all network cards and then activate DHCP again.
After this, the service ran fine after a reboot.
All right, marking this as “solved” then…
This issue has been written up many times in the community, but most folks who are panicking don’t take the time to search for failed restores. Since this seems to happen more often when the user’s 32-bit installation root folder has run out of space, the restore is usually performed on a new PC, or a second PC, while the first one is still running. Either way, the process is error-prone, and if I were to ask Peter for an enhancement, it would be either to add some logic to the 64-bit restore that checks whether the backup came from 32-bit and then applies some “clean-up”, or to point the user at a URL that documents the follow-up steps that are needed.
In addition to your DHCP service not running, you’ll also find that your performance graphs are broken. Yep, this has also been discussed in the blogs, so a search will help you out there without the need for a new thread.
From my perspective (as you might guess, I’ve been through this) the backup contains a bit more than is really needed. It’s as if it had great intentions to be almost “clone-like” for a failed system, but over time some of the items just don’t seem to make sense to back up. I think if the critical items that no one wants to re-enter were saved, along with everything needed to make a replacement completely functional again (yes, even with different NICs, plus some detection to prompt the user for which NIC to use for red/green/etc.), it would prevent issues such as this.
As for going from 32-bit to 64-bit, I can’t voice any opinion there. Frankly I was AMAZED that it mostly worked! So kudos to the devs who were part of that, and thanks to the community for providing notes on how to get that last 10% working when you do need such a restore.
One trap I fell into was restoring the add-ons.
I thought that restoring the add-ons would also install them,
but you have to install each add-on first and then restore its settings.
|
OPCFW_CODE
|
Thu, 30 Oct 2003 15:18:15 +1300
Greetings, I am a health information systems developer, in Waitakere, New
Zealand. I have been developing software in console-based apps, Windows
apps, and IIS intranet solutions, for some time now. The technology offered
by IIS, ASP, VB etc. has become a little, err, boring, and rather than
learning .NET I decided to investigate dotGNU (hey, I'm not buying Visual
Studio .NET for home use, so if I'm going to be working with a free SDK from
the command line, why not do it in Linux and at least enjoy the experience).
Anyways, I have installed, built & configured dgee etc from cvs, got it to the
point where most of it seems to be running - at least I can get the DGEE
examples up in apache.
When I start DGEE it fails to start DGpnetVM, which does not seem to be able
to locate libgc.so.1 according to the log. This is located in
/usr/local/lib/pnet/, but that does not seem to be included in the
environment vars set up for DGEE, though /usr/local/lib/dgee is. Should I add
the pnet lib to one of the DGEE environment vars?
Also, when I ran make in the DGEE examples dir it gave what looked like serious
errors, but created the dll & dgmx files anyway. Is that normal, or an
indication that I have something configured wrong? See below...
/usr/local/bin/cscc -Wall -g -shared -o wstestClient.dll wstestClient.cs
-L../cslib/System/Web/Services -L../cslib/System/Web -L../cslib/DotGNU/XmlRpc
-lSystem.Web -lSystem.Web.Services -lDotGNU.XmlRpc
wstestClient.cs:35: invalid type specification `XmlRpcClientProtocol'
wstestClient.cs:44: invalid type specification `XmlRpcClientProtocol'
wstestClient.cs:53: invalid type specification `XmlRpcClientProtocol'
wstestClient.cs:32: invalid type specification `(null)'
wstestClient.cs:32: `int' does not inherit from `System.Attribute'
wstestClient.cs:41: invalid type specification `(null)'
wstestClient.cs:41: `int' does not inherit from `System.Attribute'
wstestClient.cs:50: invalid type specification `(null)'
wstestClient.cs:50: `int' does not inherit from `System.Attribute'
make: *** [wstestClient.dll] Error 1
I am not sure where to start with all this. I am probably most interested in
putting together some web services to skill up on C#, but intelligent client
UI is the area I am most interested in. At least I have a vague idea what I'm doing.
Where to now? Is there much documentation and/or examples of web services to
work through? What areas of dotGNU should I be looking at for testing etc.?
Health information systems developer
Les Ferguson
|
OPCFW_CODE
|
awk command in slide2mp4.sh does not work with gawk-5.1.0
Description
Running the sample does not work on Fedora 33 as:
$ ../slide2mp4.sh test-slides.pdf test-slides.txt test-lexicon.pls test-output.mp4
Format checking of input files is completed.
...
list.txt: No such file or directory
The root cause is the awk command in slide2mp4.sh:
https://github.com/h-kojima/slide2mp4/blob/e654143cd1aeea80150f16b4f2461dc300f63d38/slide2mp4.sh#L90
It does not work with gawk-5.1.0, which is installed by the Fedora 33 RPM package, so the script cannot complete.
It needs awk '/<\?xml/,/<\/speak>/' instead of awk '/\<\?xml/,/\<\/speak\>/' on my Fedora machine.
Step to reproduce
The following command does not work:
$ cat test-slides.txt |awk '/\<\?xml/,/\<\/speak\>/'
(no output)
This awk command, however, works:
$ cat test-slides.txt |awk '/<\?xml/,/<\/speak>/'
<?xml version="1.0" encoding="UTF-8"?>
<speak version="1.1">
<prosody rate="110%">
これはタイトルスライドであり、
これから、サンプルスライドをご紹介します。
OpenShiftとVirtualizationの読み上げテストもします。
</prosody>
... snip ...
My awk version is below.
$ rpm -qf `which awk`
gawk-5.1.0-2.fc33.x86_64
Additional info (workaround on Fedora 33)
So gawk-5.1.0 needed the following patch, but I guess it will not work with the Mac's awk?
diff --git a/slide2mp4.sh b/slide2mp4.sh
index 8a9c5a5..ea0a4ee 100755
--- a/slide2mp4.sh
+++ b/slide2mp4.sh
@@ -87,7 +87,8 @@ echo "Format checking of input files is completed."
mkdir -p json mp3 mp4 png srt xml
-cat $TXT_FILE |awk '/\<\?xml/,/\<\/speak\>/' > tmp.txt
+cat $TXT_FILE |awk '/<\?xml/,/<\/speak>/' > tmp.txt
+
cat << EOF > txt2xml.py
#!/usr/bin/python3
# Usage: python3 txt2xml.py xml_txt
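For background on why the original pattern fails: in gawk, `\<` and `\>` are GNU word-boundary operators, so `/\<\?xml/` asks for `?xml` at the start of a word and can never match `<?xml`. Since a literal `<` needs no escaping in an ERE, the unescaped form is the portable one. A quick check, runnable with any awk:

```shell
# In gawk, '\<' is a word-boundary operator (a GNU extension), so the
# pattern '\<\?xml' requires "?xml" at the start of a word and can never
# match the line '<?xml ...'. A literal '<' needs no escaping in an ERE,
# so the unescaped pattern works with gawk, mawk, and BSD awk alike:
printf '<?xml version="1.0"?>\n' | awk '/<\?xml/'
# prints: <?xml version="1.0"?>
```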
I've confirmed that I can reproduce this issue by using gawk on CentOS Stream release 8.
The gawk version is below.
$ cat /etc/redhat-release
CentOS Stream release 8
$ rpm -qf `which awk`
gawk-4.2.1-2.el8.x86_64
I've applied the following patch to slide2mp4.sh.
https://github.com/h-kojima/slide2mp4/commit/a66ea15363d123307a1d898c5527735d16dcabdd
And, I've confirmed that it still works on macOS Big Sur after applying this patch.
The 'awk' version on macOS is below.
$ awk --version
awk version 20200816
Please try to see if slide2mp4.sh works on Fedora.
Thank you! It worked :+1:
|
GITHUB_ARCHIVE
|
Explanation on "Tensorflow AutoGraph support print and assert"
Background
In "AutoGraph converts Python into TensorFlow graphs" it says:
We (AutoGraph) also support constructs like break, continue, and even print and assert. When converted, this snippet’s Python assert converts to a graph that uses the appropriate tf.Assert.
However, "Introduction to graphs and tf.function" says:
To explain, the print statement is executed when Function runs the original code in order to create the graph in a process known as "tracing" (refer to the Tracing section of the tf.function guide). Tracing captures the TensorFlow operations into a graph, and print is not captured in the graph. That graph is then executed for all three calls without ever running the Python code again.
Question
The first document gives the impression that I can use print and TensorFlow AutoGraph will convert it into TensorFlow operations. However, apparently that is not the case, as the second document shows.
Please help me understand whether the sentence in the first document stating "We/AutoGraph support even print and assert" is still correct, or whether I misunderstand something.
In my understanding, AutoGraph is the one being used under @tf.function and tf.py_function.
The documentation is correct, Python print() is never converted to tf.print() during tracing. This is, for one, important for debugging and diagnostics: the majority of… eh, well, unintended consequences happen during tracing. print() may be liberally used in the body of the function, and has its effect only during the tracing phase.
The reason for this discrepancy is that the first example, a blog post, is six years older than the question, and apparently refers to TensorFlow 1.x. AFAIK, the decorator @autograph.convert was experimental in TF1, and had not made it into TF2; @tf.function subsumes the behavior. But generally, between a blog post and the documentation of any feature, the documentation is more authoritative.
In addition to the Graph overview that you've linked, there is a more detailed guide, Better performance with tf.function. (I'm not sure what the note about a “very different view of graphs[...] for those [...] who are only familiar with TensorFlow 1.x” in the Graph article is really about.) The tf.function documentation is also a must-read, not the least for the links to tutorials and guides it contains.
In my understanding, AutoGraph is the one being used under @tf.function…
That's correct. A function is converted into a GenericFunction object, which then can be instantiated into a few ConcreteFunctions, with different tensor shapes or data types, without re-tracing. However, a function may be re-traced if the reification process cannot conclusively prove that Python variables have not changed, when compared to all existing cached GenericFunction traces of the same Python function. AutoGraph proper is more involved with the second pass, reifying and optimising the ConcreteFunction, which (in addition to, basically, metadata) contains a Graph object, which can be placed and run on a device, or saved into a completely portable model (with some limitations, mainly concerning the TPU devices). The first pass primarily creates Python code rigged to call AutoGraph during the second phase.
…and tf.py_function.
@tf.py_function is a pragma directive for Autograph that tells it to create a graph op that calls back into Python from the graph. It has no effect in eager mode. (Needless to say, this makes the whole model less portable and unable to run without the full Python runtime, in addition to slowdown and synchronisation.) This is different from @tf.function, which declares that the function is intended to be transformed by Autograph, and causes the tracing of the function when it's called, if all conditions, documented in the tf.executing_eagerly article are met. They are both declarative, but their declarations are intended for different moving parts of the framework.
|
STACK_EXCHANGE
|
Bitcoin clients for Mac
By default Bitcoin Core puts its data in ~/Library/Application Support/Bitcoin/ on a Mac. On Linux the data directory is ~/.bitcoin/ (run "ls -a" to see directories that start with a dot); if you can't find your wallet, a search such as find / -name wallet.dat -print 2>/dev/null will locate it. An overview of the directory contents is in files.md in the Bitcoin Core source.
Bitcoin Core (formerly Bitcoin-Qt) is the official desktop wallet developed by the Bitcoin core developers and builds the backbone of the network. It is a full node client: one needs to download the whole Bitcoin blockchain to send or receive a transaction, which is a memory- and disk-intensive process, so you also need a reliable internet connection with plenty of bandwidth and hard drive space. In exchange it offers high levels of security, privacy, and stability, though it has fewer convenience features than lighter wallets. Bitcoin Core is a community-driven free software project released under the MIT license, and it is cross-platform: Windows, Mac, and Linux. Release signatures, a download torrent, the source code, and the version history are all published by the project, so if someone wants to mod the Mac client for their own purposes (e.g. reskinning, or development of a client with extra features), the existing project can be downloaded and built.
Installing on a Mac is straightforward: OS X opens a Finder window for you to drag Bitcoin Core to your Applications folder. The first time it runs, OS X asks you to confirm that you want to run it, and you are then prompted to choose a directory to store the block chain and your wallet. To verify a Windows download, run certUtil -hashfile bitcoin-0.21.0-win64-setup.exe SHA256 (replace the name with the file you actually downloaded) and ensure the checksum matches one listed in the checksums file published with the release.
Most other popular wallets are SPV or "lite" wallets: they don't keep a full copy of the blockchain and instead rely on other computers on the network for transaction information. Electrum is a well-known example for Windows, Mac, and Linux; its website is hosted by Electrum Technologies GmbH, founded by Thomas Voegtlin in 2013, whose mission is to develop, package and distribute Electrum software and to provide services to Bitcoin users and businesses. The Bitcoin Wallet by Bitcoin.com is a simple, secure way to send and receive Bitcoin, available for iOS, Android, Mac, Windows, and Linux, and supports both Bitcoin Cash (BCH) and Bitcoin (BTC). Two-factor authentication (2FA) adds security to a wallet: the first factor is your wallet password, the second a verification code retrieved via text message or from an app on a mobile device, conceptually similar to the security token devices that banks in some countries require for online banking.
For mining, CGminer has been going strong for many years and remains one of the most popular GPU/FPGA/ASIC mining programs. It is a command-line application written in C, available on Windows, Mac, and Linux. Pros: supports GPU/FPGA/ASIC mining, popular, frequently updated. Cons: textual interface. There aren't many Bitcoin mining clients for the Mac, so if you are uncomfortable with the command line/Terminal (or would just like a little more feedback), NiftyHash, a free OS X mining client for NiceHash, is worth a look (download: Mac NiftyHash 1.0b3).
Jack Dorsey recently set up a Bitcoin full node on an Apple Mac with the high-performance ARM-based M1 chip, and in response to node software provider Umbrel said he might set up a separate node running on a Raspberry Pi. Bitcoin Core acts as a full node and provides wallet functionality for managing Bitcoin; it is the most private Bitcoin wallet, although it takes patience and quite some time to set up. Local Crypto is for viewing the earnings rates and balances in other crypto-currencies besides Bitcoin. Lastly, a disclaimer: mining Bitcoin is NOT the best way to get bitcoins.
|
OPCFW_CODE
|
Hi, I’d like to make sure of my understanding about the max number of warps and warp size per SM. Sorry for this naive question. For Pascal architecture (cc 6.1), according to Table 15 in CUDA C++ Programming Guide, each SM has maximum 64 resident warps and 32 threads per warp so that the max number of resident threads is 2048 per SM.
From the architectural perspective, what architectural feature makes the limit in the max number of resident warps and warp size in max per SM? For example, why can’t each SM have 128 resident warps and 16 threads per warp instead? or 32 resident warps and 64 threads?
Thanks for your time.
A warp size of 32 threads has been a hardware constant for all Nvidia GPUs from CC 1.0 to the present CC 9.0. While there is nothing to stop you coding in such a way as to only utilize 16 threads per warp, you will be wasting 50% of the hardware, as the scheduler issues instructions in terms of warps - 32 threads.
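That arithmetic can be made concrete with a framework-free sketch; the constants are the CC 6.1 values from the table the question cites, and the function name is illustrative, not a CUDA API:

```python
import math

WARP_SIZE = 32            # hardware constant from CC 1.0 through CC 9.0
MAX_RESIDENT_WARPS = 64   # per-SM limit for CC 6.1 (Pascal)

# Maximum resident threads per SM follows directly from the two limits:
assert WARP_SIZE * MAX_RESIDENT_WARPS == 2048

def active_lane_fraction(threads_per_block):
    """Fraction of issued lanes doing useful work: the scheduler always
    issues whole 32-thread warps, so any remainder lanes sit idle."""
    warps = math.ceil(threads_per_block / WARP_SIZE)
    return threads_per_block / (warps * WARP_SIZE)

print(active_lane_fraction(16))  # 0.5 -> the 50% waste mentioned above
print(active_lane_fraction(48))  # 0.75 -> 48 threads still occupy two warps
```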
As to the limit of why the maximum of 64 warps for CC 6.1, this is probably a hardware tradeoff based around the amount of resources required for the scheduler to juggle warps. Once the currently executing warp stalls, (waiting for a memory request, waiting to run on a particular under pressure functional unit etc), the scheduler parks this warp and runs the next one available that’s ready to run.
In processor design flexibility leads to complexity. The guiding principle of GPU design is minimizing the complexity of handling control flow and to a lesser extent, data access. This saves square millimeters on the die that can then be used for (1) more execution units, (2) a larger or smarter on-chip memory hierarchy, roughly in that order. For workloads that can benefit from massive parallelism, GPUs owe their performance advantage vs CPUs to focusing on these two aspects.
Note that the die sizes of the highest-performing CPUs and GPUs are close to the limit of what is manufacturable (currently around 850 square millimeters; a Xeon Platinum 9200 die is ~700mm2, a H100 die is ~ 810 mm2), so design trade-offs have to be made. One cannot “have it all”. Since larger die size translates to larger cost, these trade-offs similarly apply to lower-cost, lower-performing variants at various price points.
This leads to divergent design philosophies. CPUs are optimized for low latency and irregular control flow and data access patterns, with large on-chip memories and a decent number of execution units. GPUs are optimized for high throughput, regular control flow and data access patterns, with an extremely high number of execution resources and decent size on-chip memories. In the near future we will likely see tightly coupled CPU/GPU combos that reap the benefit of both worlds. One way to achieve this is to build processors from multiple dies (in a single package) which are sometimes called chiplets.
Thanks a lot, njuffa for your detailed answer.
Now I understand why the warp size is fixed. I hope NVIDIA provides detailed documentation of their architectures as a reference for CUDA programmers who, like me, are also microprocessor architecture enthusiasts, since practical GPU programming depends heavily on the architecture.
If you’re not already aware of them, the NVIDIA architecture whitepapers may be an interesting resource. The most recent of these are very long documents, so depending on your background you might also want to look at older ones, which are shorter and cover some basic ideas in more detail. Here is the one for A100, for example.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.
|
OPCFW_CODE
|
Why hostile mobs start to spawn then all of a sudden they just vanish? Minecraft 1.10.2
This is a very strange phenomenon: hostile mobs will start spawning at night, then all of a sudden vanish without a trace within 2 days (Minecraft time). At first they spawned regularly in the world I created; then my game froze and crashed twice. Now when I go into the game no hostile mobs spawn at all, and no crashing occurs.
FML Log
Modlist I am using
If this isn't the right place to have this answered then I will go to Minecraft forge to ask this.
Edit: Hmmmm...I'm going to check something out really quick and see if the problem is what I think it is.
Re-Edit: I found out the issue. It was a mob spawn editor mod named NoMobsSpawnOnTrees. The spawns-per-tick setting was set to 0.0, so I set it to 20.0. So far hostile mobs do spawn at night now.
Extra spawning tries per tick. This only applies to hostile mobs.
D:extraSpawningTries=20.0 <-- was set to 0.0 before
Make sure you set this in the settings if you want hostile mobs to spawn at night.
Instead of editing your question, why not answer it? Because right now, it looks like your question is unanswered when the problem has been fixed. Also, anyone having a similar issue will think your question is unanswered.
If it were not a mod pack, I would say that hostile mobs naturally despawn after a period of time. If you move too far away from the mobs, they will despawn from that area and spawn in dark areas somewhere around you. This point is made when testing mobs in a fortress and the overworld. In the overworld, I have tested the theory that mobs despawn after you die. This is false: if you die near the spawn area or your bed (if you have one), you will respawn and the mobs are still there. However, let's say you died in a fortress that is 2000 blocks away from where you spawned. After a while of you not being near there, those mobs automatically despawn from that area just so they can spawn where you are. As for them never spawning at all, there are only 2 options. The first is that the light level is too high for them to spawn, and they instead gather in underground caves. The second is that, ultimately, you glitched your game. This sometimes happens with save files that were not properly shut down, and you may need to modify the file or just create a new world.
Part of the problem with answering your question is that it is unclear exactly what your question is.
"...hostile mobs will start spawning at night then all of a sudden vanish with out a trace within 2 days (Minecraft time)"
Your opening statement sounds like normal, typical behavior. Hostiles spawn and despawn based on rigid game rules. They only spawn further than 25 blocks away from the player, and less than 129 blocks from the player. Once they are closer than 23 blocks to the player they cannot despawn. If the player moves more than 82 blocks from a hostile mob, it will despawn. One of the ways to clear a small island, for example, is to row away from the island then rapidly return before many hostiles can respawn. If the island is smaller than 26 blocks across, nothing will be able to spawn on it while you are on it.
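The distance bands in this answer can be encoded as a small sketch (the numbers are the ones quoted above; the function names and the has_been_close flag are illustrative, not game API):

```python
SPAWN_MIN, SPAWN_MAX = 25, 129  # hostiles spawn only in this band (blocks)
DESPAWN_BEYOND = 82             # player farther than this -> mob despawns
SAFE_WITHIN = 23                # once this close, a mob can no longer despawn

def can_spawn(distance):
    """Can a hostile spawn at this distance from the player?"""
    return SPAWN_MIN < distance < SPAWN_MAX

def can_despawn(distance, has_been_close):
    """Can a hostile despawn, given whether it ever came within 23 blocks?"""
    return distance > DESPAWN_BEYOND and not has_been_close

print(can_spawn(12))            # False: a small island stays mob-free
print(can_despawn(100, False))  # True: the player rowed far enough away
```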
Welcome to Arcade SE. Indeed, this isn't clear (as Daniel G stated in his comment), but the OP made an edit to his question stating that the problem is solved. Also, once you pass the 10-reputation threshold, you'll be able to post comments under the question to ask for clarification, such as "Is the island longer than 26 blocks?", instead of asking for clarification in an answer.
It is probably something to do with your mods, like something that disables mob spawns.
Or they are out of range and despawning; as above, there could be a mod that makes the despawn range shorter.
|
STACK_EXCHANGE
|
Parent directory not resolving via relative path?
If you add a reference to a file in a parent directory, it resolves as the full path to the file (e.g. if I have a tour in C:\brianrosamilia\parent\NewThing\, then adding C:\brianrosamilia\parent\my.Config should resolve as ..\my.Config, not the full path).
What happens when I add a tour step like that: it resolves as C:\brianrosamilia\parent\my.Config and clicking on the tour step will not work
What I'm expecting: it resolves as ..\my.Config and clicking on the tour step will work
The nice thing is you can actually just convert the path to relative yourself in the tour JSON as a workaround and everything will work. It even works if you subsequently add steps to that tour in the same file.
I'm assuming you agree with this on a technical level but, in my opinion, relative paths are great for things checked into source control so I feel this is the preferred behavior.
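For anyone applying the manual workaround, the rewrite is just a relative-path computation. A sketch using the paths from the report; ntpath applies Windows path rules regardless of the host OS:

```python
import ntpath  # Windows path semantics on any platform

tour_dir = r"C:\brianrosamilia\parent\NewThing"
step_target = r"C:\brianrosamilia\parent\my.Config"

# This is the value to put in the tour JSON instead of the absolute path:
print(ntpath.relpath(step_target, start=tour_dir))  # ..\my.Config
```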
Btw I really appreciate this project. I am using it in a presentation tomorrow (wish me luck :) )
Hey! Yep I totally agree with you on this, this is just a bug :) I'll try to get this fixed in the next couple of days. In the meantime, I'd love to hear how your presentation goes tomorrow 👍
Hey @lostintangent. Didn't see anywhere else to give feedback so I'll do it here
I think everyone really enjoyed the code tour! And it helped me stay focused and made it so I didn't miss anything while presenting code (maybe even more important).
One suggestion that would have really helped me and I think would be amazing and really set this project ahead of traditional presentation tools :
I was glad to see the tour steps support markdown.
So when I type npm run storybook it shows how I expect it and how another developer would like to read it.
But it would be even cooler if typing that was translated to
▶️ npm run storybook
And clicking the play button would actually run the task. It was slightly cumbersome to use my (extremely loud) mechanical keyboard to show a few new npm tasks. It would've been great to just click a button and show them running. Anyway, not sure if you are thinking about adding language/platform specific tools, but I hope you do so that things like this are possible. Thanks again for this project 😄
@BrianRosamilia Yeah, I definitely think this feature enhancement would make sense. Would you mind creating a new issue to track that separately?
Also regarding your original request: is the parent folder a git repo? And NewThing and My.Config are child directories of it?
In my original example, the parent directory was the root of the git repository and the code tour was a couple of folders deeper in that project (because it was an introduction to react for a focused part of the codebase)
The same thing happens on my mac, with the latest version as well fyi. If I am in an azure function app deep in my repo and I create a code tour there and I add a step to something like my license or readme, it's an absolute path.
@BrianRosamilia Yep, I completely agree! I just wanted to verify that the tour was a sub-directory of the repo, since that would ensure that the relative path would be "stable" for other people when they clone the repo and try to play the tour back. I'll tackle this item ASAP 👍
The fix for this was just checked in and will be shipped later this afternoon 👍 Let me know if it works as expected. Thanks!
|
GITHUB_ARCHIVE
|
Custom parameters to the run method
Hello. Is there any way to pass parameters or custom arguments to the run methods? Is it possible to specify such custom parameters in the conditions in JSON?
The run methods have one argument, the c object, which holds the data of c.m and c.s.
It would be great to have an option to pass custom parameters based on the conditions in the JSON, something like the following (see "params" for the method "run"):
"r01": {
"run": "method1",
"params": {
"param1": 1,
"param2": 2
},
"to": {
"r02": {
"all": [
{
"m": {
"$and": [
{
"type": 1
},
{
"number": 0
}
]
}
}
]
}
}
Please, advice. Thank you.
Hi, thanks for asking the question. Passing custom parameters is not supported. The facts/events that lead to the action execution as well as the context object can be retrieved using the 'c' parameter. For example (assuming you are using python):
with ruleset('risk1'):
    @when_all(c.first << m.t == 'deposit',
              none(m.t == 'balance'),
              c.third << m.t == 'withdrawal',
              c.fourth << m.t == 'chargeback')
    def detected(c):
        print('risk1-> fraud detected {0} {1} {2}'.format(c.first.t, c.third.t, c.fourth.t))
post('risk1', { 't': 'deposit' })
post('risk1', { 't': 'withdrawal' })
post('risk1', { 't': 'chargeback' })
I have found a simple way to pass parameters to the action methods. Basically, I use the action_name label in the format "method_name|parameter". Then I parse the action_name and call "method_name(c, parameter)". The parameter could be base64-encoded JSON. Works fine.
@ruslanolkhovsky would you mind sharing your workaround on how to pass parameters? Thank you!
Hi @dcrespol
Sure. The idea is to use the action_name label for both the method name and parameters. You need to redefine the get_action() method of the Host object, and do not forget to pass the parameter to the methods to call.
from durable.lang import *
from . import actions

class Host(engine.Host):
    def __init__(self):
        super(Host, self).__init__()

    def get_action(self, action_name):
        def call(c):
            # Parse parameters
            delimiter = '|'
            if delimiter not in action_name:  # methods without parameters
                action_method = action_name
                p = None
            else:  # methods with parameters
                action_method = action_name.split(delimiter)[0]
                p = action_name.split(delimiter)[1]
                # p is a string; you can use base64 encode/decode here
                # to get the params in a form of json like {p1: value1, p2: value2}
                # or in any other format you prefer
            c.s.label = action_method
            # Get the method to call.
            # actions is a module with methods that you can call:
            #
            # def method1(c, p):
            #     print('method1 with params {}'.format(p))
            #
            # def method2(c, p):
            #     print('method2 with params {}'.format(p))
            #
            run = getattr(actions, action_method)
            # Call the method
            run(c, p)
        return call
I hope it helps. Works like a charm for me.
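The "method_name|parameter" convention itself is easy to illustrate in isolation. Here is a minimal, self-contained sketch of the parsing and dispatch logic; the `actions` dict and all helper names here are illustrative stand-ins, not part of durable_rules:

```python
import base64
import json

def parse_action_name(action_name, delimiter='|'):
    """Split an action label of the form 'method_name|<base64 JSON>'.

    Returns (method_name, params_dict_or_None).
    """
    if delimiter not in action_name:
        return action_name, None
    method_name, encoded = action_name.split(delimiter, 1)
    params = json.loads(base64.b64decode(encoded).decode('utf-8'))
    return method_name, params

# Illustrative stand-in for the 'actions' module: a dict of callables.
actions = {
    'method1': lambda c, p: ('method1', p),
    'method2': lambda c, p: ('method2', p),
}

def dispatch(action_name, c=None):
    method_name, params = parse_action_name(action_name)
    return actions[method_name](c, params)

# Build a label carrying {"param1": 1} and dispatch it.
payload = base64.b64encode(json.dumps({'param1': 1}).encode('utf-8')).decode('ascii')
label = 'method1|' + payload
```

With this scheme, the rule's "run" label carries both the method name and a base64-encoded JSON payload, and the dispatcher recovers both.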
Thank you, @ruslanolkhovsky. Works for me too!
Thank you very much to both of you! I was on a more complex path.
|
GITHUB_ARCHIVE
|
(2019-02-08, v:3.4) Alexander Anisfeld: Forward to wiki2 also from mobile version of Wikipedia
Could you please forward to wiki2 also from sites like https://en.m.wikipedia.org/wiki/Developed_environments, i.e. mobile versions? Sometimes I get those links. All the best, Alex.
(2018-10-30, v:3.4) Sayan Dasgupta: Theme
Is there only one theme? Please refresh it after some searches.
(2018-02-26, v:3.4) Zvonimir Bogovic: Other languages than English
Why are only pages in English displayed? When I choose another language, ordinary Wikipedia is displayed.
(2018-02-26, v:3.4) Zvonimir Bogovic: WIKI2 and Maxthon browser
How to install the extension in the Maxthon browser?
(2017-07-01, v:3.4) 吳東林: Font
Hi, wiki2. I would like to use the Newton Atmosphere background with the sans-serif type for better reading; how can I get there? There seems to be no settings button.
(2016-10-11, v:3.3.3) Iair Rozenbom: Not load automatically
Hi, I used the extension without problems, but now, when I enter a link in the browser, the common Wikipedia opens. I have the latest version of Chrome installed. Any suggestions? Thanks!
(2016-08-27, v:3.3.3) Not Working
The extension is not working for me. My Chrome version is 52.0.2743.116 m. Whenever there is any Wikipedia link, it just opens in normal Wikipedia.
(2016-05-15, v:3.3.1) Troy Lane: wiki 2
Stopped working
(2016-04-24, v:3.2.1) Alf Bridge: You Tube encyclopedic videos
Nice layout, but how can the YouTube encyclopedic videos be changed? Some are not entirely relevant to the subject and I can find more appropriate ones manually. Alf.
|
OPCFW_CODE
|
Blocking specific user JWT tokens?
Say a user logged in multiple times from different devices, and then they decide they want to log out of device A; we have no way of deleting the JWT which was provided to that device, right?
Here is what I've implemented, I'm not sure if this is how other sites do it or if it's a decent way of doing it.
User logs in
I create a redis session token, which has the userId + device name associated to it
I store this redis token as the subject of the JWT
I pass back the JWT.
Now that the user has a JWT, they can access secured API endpoints. Let's say the user wants to remove this session; here is what I've done:
User fetches all Redis session tokens for the particular userId (of course they need a valid JWT to fetch this data)
They choose the Redis session token which they want to destroy.
They send that token to a /destroy/{token} endpoint
The JWT that has that token as its subject will no longer work.
Doing it this way means on each request I'll have to decode the JWT, grab the Redis token, and see if it still exists. I guess this isn't expensive to do at all using Redis, or any other in-memory DB.
Is this a solid/efficient way of doing this? Are there any better/easier ways of doing this?
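For illustration, here is a minimal sketch of this flow, with a plain dict standing in for Redis and a simplified HMAC-signed token standing in for a real JWT library; all names are hypothetical:

```python
import hashlib
import hmac
import json
import secrets

SECRET = b'server-side-signing-key'   # stand-in for the JWT signing key
session_store = {}                    # stand-in for Redis: token -> metadata

def login(user_id, device_name):
    """Create a session token, store it, and issue a signed token."""
    token = secrets.token_hex(16)
    session_store[token] = {'userId': user_id, 'device': device_name}
    payload = json.dumps({'sub': token})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + '.' + sig        # simplified two-part token

def authorize(token_string):
    """Per-request check: signature valid AND session still in the store."""
    payload, _, sig = token_string.rpartition('.')
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    token = json.loads(payload)['sub']
    return token in session_store     # revoked sessions fail here

def destroy(token):
    """The /destroy/{token} endpoint: invalidate one device's session."""
    session_store.pop(token, None)
```

After destroy(token), the issued token still carries a valid signature, but the per-request lookup against the session store rejects it.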
With this setup, do you use a refresh token at all or only the JWT access token?
While implementing JWT authentication/authorization in several apps I also had this same question and reached the same solution if not a very similar one:
In my case, I would store the JWT + UserID + DeviceName in the database, and then I would have an HTTP Request
DELETE /logout/DeviceName with a header Authorization: JWTGoesHere.
This gives me two benefits:
I can now logout a user from any device using a valid JWT (it does not need to be exactly the same JWT, it only needs to be a JWT for that user).
Makes possible the implementation of "Logout all sessions except this one".
In terms of speed, the applications we've developed receive hundreds of requests per second.
More than 90% of these requests need to be authorized, which means checking that the JWT is syntactically valid, checking existence against the database and last but not least check if it's expired.
All these checks (using Redis as the database) take less than 10ms.
Bottom line is: Benchmark it, and if it doesn't take really long then it doesn't need any optimization.
Hope it helps!
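The "logout all sessions except this one" idea from the answer above can be sketched with a plain dict standing in for the database; the function names are illustrative:

```python
# token_store maps an issued token string -> (user_id, device_name).
token_store = {}

def register(token, user_id, device_name):
    """Record an issued token alongside its user and device."""
    token_store[token] = (user_id, device_name)

def logout_device(user_id, device_name):
    """DELETE /logout/{device_name}: revoke every token for that user+device."""
    for token, (uid, dev) in list(token_store.items()):
        if uid == user_id and dev == device_name:
            del token_store[token]

def logout_all_except(user_id, current_token):
    """Revoke all of a user's sessions except the one making the request."""
    for token, (uid, _) in list(token_store.items()):
        if uid == user_id and token != current_token:
            del token_store[token]

def is_authorized(token):
    # In the real setup you would also verify the signature and expiry;
    # here we only check existence against the store.
    return token in token_store
```

Because revocation is keyed by user and device rather than by the exact token, any valid token for that user can drive the logout, as described above.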
Good to hear that someone else is thinking the same way I am. So you haven't actually created a Redis session token and stored it in the JWT; you've actually just stored the whole JWT in a Redis cache, interesting. I was thinking of doing this, but came to the conclusion that using a session token and validating it on each request would give me a couple of added benefits for free. Happy to list them in the question when I have time.
@Franco Thank you for sharing your approach, indeed it helps :) I am wondering though, how do you get the DeviceName (is it something like Computer, iPhone, etc., or does it have anything extra besides the base string)? Does the device detection occur on the server side? What package do you recommend? Sorry for all the questions, but this will help me and others :)
@franco Doesn't that contradict the purpose of a JWT? The whole point of using a JWT is that I can validate it without a server-roundtrip, and now that roundtrip is back in the game.
If I understand it correctly, your approach would work as well for an opaque token and is not specific for JWT.
|
STACK_EXCHANGE
|
//
// CodableTest.swift
// CoinpaprikaAPI_Example
//
// Created by Dominique Stranz on 24/10/2018.
// Copyright © 2018 CocoaPods. All rights reserved.
//
import XCTest
import Coinpaprika
class CodableTest: XCTestCase {
let bitcoinId = "btc-bitcoin"
func testTickerEncodeDecode() {
let expectation = self.expectation(description: "Waiting for ticker")
Coinpaprika.API.ticker(id: bitcoinId, quotes: [.usd, .btc]).perform { (response) in
let bitcoin = response.value
XCTAssertNotNil(bitcoin, "Ticker should exist")
XCTAssert(bitcoin?.id == self.bitcoinId, "BTC not found")
XCTAssert(bitcoin?.symbol == "BTC", "BTC not found")
XCTAssert(bitcoin?[.btc].price == 1, "1 BTC value in BTC should be equal 1")
XCTAssert((bitcoin?[.usd].price ?? 0) > 0, "1 BTC value in USD should be greater than 0")
let encoder = Ticker.encoder
let encodedData = try? encoder.encode(bitcoin)
XCTAssertNotNil(encodedData, "Encoding shouldn't fail")
let decoder = Ticker.decoder
var decodedBitcoin: Ticker!
do {
decodedBitcoin = try decoder.decode(Ticker.self, from: encodedData!)
} catch DecodingError.dataCorrupted(let context) {
assertionFailure("\(Ticker.self): \(context.debugDescription)")
} catch DecodingError.keyNotFound(let key, let context) {
assertionFailure("\(Ticker.self): \(key.stringValue) was not found, \(context.debugDescription)")
} catch DecodingError.typeMismatch(let type, let context) {
assertionFailure("\(Ticker.self): \(type) was expected, \(context.debugDescription)")
} catch DecodingError.valueNotFound(let type, let context) {
assertionFailure("\(Ticker.self): no value was found for \(type), \(context.debugDescription)")
} catch {
assertionFailure("\(Ticker.self): unknown decoding error")
}
XCTAssertNotNil(decodedBitcoin, "Ticker should exist")
XCTAssert(bitcoin?.id == decodedBitcoin.id, "BTC not found")
XCTAssert(bitcoin?.symbol == decodedBitcoin.symbol, "BTC not found")
XCTAssert(bitcoin?[.btc].price == decodedBitcoin[.btc].price, "priceBtc \(String(describing: bitcoin?[.btc].price)) isn't equal \(decodedBitcoin[.btc].price)")
XCTAssert(bitcoin?[.usd].price == decodedBitcoin[.usd].price, "priceUsd \(String(describing: bitcoin?[.usd].price)) isn't equal \(decodedBitcoin[.usd].price)")
XCTAssert(bitcoin?[.btc].marketCap == decodedBitcoin[.btc].marketCap, "marketCapBtc \(String(describing: bitcoin?[.btc].marketCap)) isn't equal \(decodedBitcoin[.btc].marketCap)")
XCTAssert(bitcoin?[.usd].athDate == decodedBitcoin[.usd].athDate, "athDate \(String(describing: bitcoin?[.usd].athDate)) isn't equal \(String(describing: decodedBitcoin[.usd].athDate))")
expectation.fulfill()
}
waitForExpectations(timeout: 30)
}
}
|
STACK_EDU
|
If you run tasks that take a long time to complete, the default timeout period on the remote site might elapse before the task completes. You can configure additional timeouts to allow long-running tasks to finish.
About this task
A long-running task might be the test recovery or cleanup of a large virtual machine. If a virtual machine has large disks, it can take a long time to perform a test recovery or to perform a full recovery. The default timeout period monitors the connectivity between the sites, so if a task takes a longer time to complete than the default timeout period and does not send notifications to the other site while it is running, timeouts can result. In this case, you can add a setting in the vmware-dr.xml configuration file so that Site Recovery Manager does not timeout before a long-running task finishes.
By adding the <RemoteManager><TaskDefaultTimeout> setting to vmware-dr.xml, you configure an additional timeout period for tasks to finish on the remote site. You can also configure a <TaskProgressDefaultTimeout> setting to extend the time that Site Recovery Manager gives to a task if it reports its progress at regular intervals.
If you configure a <TaskDefaultTimeout> period, the default timeout does not cause tasks to fail, even if they take longer to complete than the period that the <DefaultTimeout> setting defines. As long as Site Recovery Manager continues to receive task progress notifications from the remote site, long-running tasks such as test recovery or cleanup of large virtual machines do not time out.
The initial call to start a task is subject to the <DefaultTimeout> setting. After they start, long-running tasks are subject to the <TaskDefaultTimeout> setting. If a task has not finished when <TaskDefaultTimeout> expires, the progress monitor checks whether the task has sent any progress notifications. If the task has sent notifications, the progress monitor applies the <TaskProgressDefaultTimeout> setting to allow the task more time to finish. When <TaskProgressDefaultTimeout> expires, the progress monitor checks for progress notifications again. If the task has sent progress notifications, the progress monitor gives the task more time. The sequence repeats until the task finishes or until it stops sending progress notifications.
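As a rough illustration (a toy model, not Site Recovery Manager code), the progress monitor's decision loop described above can be expressed like this:

```python
def monitor_task(events, task_default_timeout, task_progress_timeout):
    """Toy model of the progress-monitor loop.

    `events` maps a timestamp (seconds since task start) to either
    'progress' or 'finished'. Returns 'finished' or 'timed_out'.
    """
    deadline = task_default_timeout
    last_checked = 0
    while True:
        # What happened between the last check and the current deadline?
        window = {t: e for t, e in events.items() if last_checked < t <= deadline}
        if 'finished' in window.values():
            return 'finished'
        if 'progress' in window.values():
            # Progress was reported: extend the deadline and check again.
            last_checked = deadline
            deadline += task_progress_timeout
            continue
        # No notifications in the window: the task times out.
        return 'timed_out'
```

A task that keeps reporting progress keeps earning <TaskProgressDefaultTimeout>-sized extensions; a silent task times out at the first unextended deadline.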
- Log in to the Site Recovery Manager Server host.
- Open the vmware-dr.xml file in a text editor.
You find the vmware-dr.xml file in the C:\Program Files\VMware\VMware vCenter Site Recovery Manager\config folder.
- Locate the <RemoteManager> element in the vmware-dr.xml file.
The default timeout for starting all tasks on the remote site is 900 seconds, or 15 minutes.
<RemoteManager>
  <DefaultTimeout>900</DefaultTimeout>
</RemoteManager>
- Add a <TaskDefaultTimeout> element inside the <RemoteManager> element. Set the <TaskDefaultTimeout> period to a number of seconds that is greater than the <DefaultTimeout> period. The <TaskDefaultTimeout> setting has no maximum limit.
<RemoteManager>
  <DefaultTimeout>900</DefaultTimeout>
  <TaskDefaultTimeout>2700</TaskDefaultTimeout>
</RemoteManager>
- Add a <TaskProgressDefaultTimeout> element inside the <RemoteManager> element. The <TaskProgressDefaultTimeout> period must be at least 1/100th of the <TaskDefaultTimeout> period. If you set a period that is less than 1/100th of the <TaskDefaultTimeout> period, Site Recovery Manager silently adjusts the timeout.
<RemoteManager>
  <DefaultTimeout>900</DefaultTimeout>
  <TaskDefaultTimeout>2700</TaskDefaultTimeout>
  <TaskProgressDefaultTimeout>27</TaskProgressDefaultTimeout>
</RemoteManager>
- Save and close the vmware-dr.xml file.
- Restart the Site Recovery Manager Server service to apply the new settings.
|
OPCFW_CODE
|
UPDATE APRIL 2018
It’s the right one, it’s the bright one, that’s blockchain technology. Xeline is a beginner-friendly wallet which makes using XEL, a cryptocurrency-driven grid of computation nodes, as easy as 1-2-3.
https://xeline.org for the wallet and roadmap!
Mainnet launched on 17 June 2017. The litewallet is now available to redeem your XEL.
The new website: http://www.elastic.pw
Whitepaper: https://raw.githubusercontent.com/elastic-project/whitepaper/master/whitepaper.pdf (this will be revised)
Github (for more information): https://github.com/elastic-project
Github genesis block: https://github.com/elastic-project/genesis-block/blob/master/genesis-block.json
Bitcointalk in the beginning; https://bitcointalk.org/index.php?topic=1396233.0
Bitcointalk continue the discussion on a different thread; https://bitcointalk.org/index.php?topic=1957064.new#new
More discussion; https://talk.elasticexplorer.org/
You can now redeem your XEL right from the litewallet.
Phase 4 of the Coin Competition is live! We will be stricter with coins using bots, to make the competition fairer. The competition will also be held for 12 days, so be sure to support your coin, and good luck!
Here's a kick-starter for this weekend to earn more points, and some high-leverage tasks to get even more points: Link
Can anyone please advise if circulating supply of 91,623,140 found in the following Explorer is correct? Does anyone know how many tokens were issued after donation time ended?
Also, can anyone please advise on the amount in USD raised during donation time?
Today there is great news. There is an update containing the future of how XEL will be used in the distributed computing world. The new wallet is still in test phase, but it is nice and clear, and can be used by everyone.
1-2-3 Go and Explore Xeline.org
I just saw this topic from 7 months ago, https://www.reddit.com/r/XEL/comments/6izyyu/launch_mainnet_useful_information/,
and I couldn't write my question there because it's archived.
What is the reason people connect their Bitcoin wallets and send signatures etc?
Is there a way to claim some free XEL? If so, please tell me how. Or was there an airdrop or a fork or something?
Elastic (XEL) is a crypto-currency driven infrastructure for decentralized computation. It is a free, non-commercial and independent open source project with contributors from around the world.
|
OPCFW_CODE
|
error: pathspec 'master' did not match any file(s) known to git
I am using svn2git to migrate SVN to Git, but I am getting the error below:
\MYProject\Demo>svn2git https://shwetakhai-w10.cybage.com:8443/svn/SVN2GIT/ --authors authors.txt -v
Running command: git svn init --prefix=svn/ --no-metadata --trunk='trunk' --tags='tags' --branches='branches' https://shwetakhai-w10.cybage.com:8443/svn/SVN2GIT/
Initialized empty Git repository in D:/MYProject/Demo/.git/
Running command: git config --local --get user.name
Running command: git config --local svn.authorsfile authors.txt
Running command: git svn fetch
Running command: git branch -l --no-color
Running command: git branch -r --no-color
Running command: git config --local --get user.name
Running command: git config --local --get user.email
Running command: git checkout -f master
error: pathspec 'master' did not match any file(s) known to git
command failed:
git checkout -f master
Had the same issue and svn2git didn't work for me at all.
I managed to migrate using the steps outlined here
https://gist.github.com/leftclickben/322b7a3042cbe97ed2af
Looks like git svn fetch failed. You can try to figure out why it failed by running it manually.
Current versions of Git no longer create a master branch by default, but a main one. It may simply be that your Git is recent enough that svn2git no longer handles this change.
https://about.gitlab.com/blog/2021/03/10/new-git-default-branch-name/
To add to the comment above, you may be able to get around this by running git config --global init.defaultBranch master and re-running svn2git. (make sure to change it back to main afterwards if that's something you care about)
I'm attempting this currently, but it'll be another 16 hours before I know if it was successful, since this error occurs after the git svn fetch. :)
It doesn't seem like my previous suggestion worked (after another 16 hours).
I forked this repo and changed it so that it looks for main now instead of master – I'm going to try that, and anyone else is welcome to try it too: https://github.com/zedseven/svn2git
Hi @zedseven,
I've tested your forked repo on my SVN repo successfully, after I updated git to the latest version and added --nobranches.
Thanks!!
Instead of getting into default branch name holy wars or a compatibility excretion contest by making a fragile assumption, I've found this pattern to work without modifying svn2git:
mkdir {{your_repo}}
cd {{your_repo}}
git init -b master
svn2git ....
This has the advantage of not modifying git global defaults or changing svn2git which would require multiple layers of up- and downstream release processes to arrive as a standard system package. A more robust fix would be for svn2git to check which branch it created rather than assuming it because it will break in environments where default branches are another choice.
|
GITHUB_ARCHIVE
|
C,UNIX. Sending output from execlp through a UNIX socket
I'm writing two programs (one client, one server) in C that communicate with each other through a UNIX socket. The idea is that the client sends a command to the server, like ls -l, the server creates a child (fork()) and the child does execlp(...,command,...) and the output from execlp is put in the client's terminal window.
However, as it is right now, the output from the commands I send to the server is written in the server's terminal window, not the client's. Is there a way to grab the output from execlp and send it through the socket with send(..., string, ...) to the client?
I would like to stick to using sockets, not pipes (all the similar questions I've found have had answers suggesting pipes).
The previous answer was wrong; for some reason my mind was fixed on pipes. As Jonathan Leffler points out in the comments, you can achieve this more elegantly.
When a new connection comes up, fork a new child on which it waits
The child inherits the socket from the parent and the parent closes it
The child replaces its file descriptors using the socket:
dup2(sockfd, STDIN_FILENO); /* Check the return value for these. */
dup2(sockfd, STDOUT_FILENO);
dup2(sockfd, STDERR_FILENO);
The child execvps the new program, as requested by the client
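As a sketch of the same pattern (Python used here for brevity, with a socketpair standing in for an accepted connection; on a real server the socket returned by accept() plays this role):

```python
import os
import socket

def run_command_over_socket(argv):
    """Fork a child whose stdin/stdout/stderr are a socket, exec argv,
    and return whatever the command wrote, as seen by the 'client' end."""
    parent_end, child_end = socket.socketpair()
    pid = os.fork()
    if pid == 0:
        # Child: the socket becomes the standard I/O streams.
        parent_end.close()
        os.dup2(child_end.fileno(), 0)   # stdin
        os.dup2(child_end.fileno(), 1)   # stdout
        os.dup2(child_end.fileno(), 2)   # stderr
        os.execvp(argv[0], argv)         # never returns on success
        os._exit(127)
    # Parent (playing the client here): read until EOF.
    child_end.close()
    chunks = []
    while True:
        data = parent_end.recv(4096)
        if not data:
            break
        chunks.append(data)
    os.waitpid(pid, 0)
    parent_end.close()
    return b''.join(chunks)
```

Because the exec'd command inherits the socket as its standard streams, no explicit send() of the output is needed; the parent simply reads the socket until EOF.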
Is there a reason not to rig the child ls so that its standard output (and standard error) go to the socket?
@JonathanLeffler Can you please clarify ?
I would expect the server to deal with the process by accepting the connection and forking a child to manage the request while the parent resumes listening. The child would fix standard input to come from the socket, and standard output and standard error to go to the socket, and could then simply execvp() the requested process. This will work when the client is on a different machine from the server, whereas pipes most definitely won't work across machines. Minor details open to discussion, but the overall thesis is "make the socket into the standard I/O streams and run the program".
@JonathanLeffler You are completely right. I don't have any explanation why I was so fixed on pipes.
First of all, thanks for the help; the output from execlp is now appearing in my client. A new problem I have now is that if I start with "ls" it prints what it should, but each following command I enter prints the last 11 files from ls in addition to what the command should print. For example, "ps" preceded by "ls" gives the output from ps (which looks fine) followed by 11 lines from ls. After every read from the socket I clear the buffer (by copying 20 spaces with strcpy into the string), so I don't understand why it still prints those last 11 lines.
|
STACK_EXCHANGE
|
Certified Red Team Operator (CRTO) is a certification opportunity presented by ZeroPointSecurity. The certification ties directly to the Red Team Ops I course offering, which is a fundamental yet thorough introduction to maneuvering through an Active Directory environment and abusing misconfigurations with CobaltStrike and open-source tooling.
The course does an outstanding job covering the attack lifecycle, the CobaltStrike C2 framework and how to employ it in an engagement, common misconfigurations that can be abused in Active Directory environments, and opsec considerations when performing operations and evading Windows Defender. It is self-paced but offers great support via the course Discord, where RastaMouse (the course developer) and other students discuss course material and help brainstorm through questions and challenges encountered.
Prior to the course, I had completed AlteredSecurity's Attacking and Defending Active Directory course and achieved the CRTP. I completed this a year prior and had a gap in using the course material, so I did have to relearn some of the topics, but it definitely did help to have the previous exposure. Some light coding experience, especially in C# and PowerShell, would be beneficial but isn't a deal breaker if you're dedicated and diligent.
Lab time is offered as a subscription through SnapLabs. The lab is a critical part of the course; I highly advise going through all of the course material with Windows Defender disabled (the default setting of the lab), and afterward going through all of the course material and labs with Windows Defender enabled. The lab instructions are pretty clear and if something isn't, the Discord is there to support you.
The exam was a fun challenge. In my opinion, if you complete the course material and lab work with Windows Defender enabled and can achieve all of the objectives, you will be ok on the exam. Everything you need for the exam is covered in the course material.
Booking the exam is pretty simple; it is booked through the link here. There are plenty of timeslots available, some even same-day. Once you book your exam, you will immediately be provided with a threat profile that explains the adversary you will be emulating on the engagement along with other details.
As far as the exam experience goes, access to the lab environment starts precisely at the time you booked the exam for. You must find and submit flags to the SnapLabs dashboard and need at least 6 out of the 8 total flags to pass the exam. You are allotted 48 hours across 4 days to complete the exam objectives. You are able to pause the exam environment if needed.
The exam took me longer than I would've preferred, but ultimately I achieved the 6 flags I needed to pass. I started to work towards the 7th, but something went awry and things broke, so I called it good where I was at.
I highly encourage you to pace yourself, take breaks, eat, go for a walk, etc. Have a notepad/paper and pen available to sketch out what you're trying to achieve; it may be helpful. 48 hours is plenty of time to compromise the provided environment, provided you have prepared appropriately.
"Certified Pre-Owned: Abusing Active Directory Certificate Services" by Will Schroeder and Lee Christensen
"Delegating Like a Boss: Abusing Kerberos Delegation in Active Directory" by Kevin Murphy
|
OPCFW_CODE
|
In last week’s blog posting I introduced the basic concept and the problems behind Parameter Sniffing in SQL Server. As you have seen, it can lead to serious performance problems when a cached plan is blindly reused by SQL Server. Today I want to show you how you can deal with this problem, and how to overcome it using a number of different techniques.
The underlying root cause of the Parameter Sniffing problem that we discussed last week is the fact that in some cases the SQL statement produced a Bookmark Lookup and in other cases produced a Table/Clustered Index Scan operator in the execution plan. If you are able to change the indexes in your database, the easiest fix is to provide a Covering Non-Clustered Index for this specific query. In our case you have to include the additional requested columns from the Bookmark Lookup in the leaf level of the Non-Clustered Index. If you do that, you have also achieved so-called Plan Stability: regardless of the provided input parameters, the Query Optimizer always compiles the same execution plan – in our case an Index Seek (Non Clustered) operator.
If you don’t have a chance to work on your indexing strategy (maybe you deal with a 3rd party application, where you are not allowed to make indexing changes), you can work with a number of “transparent” SQL Server options that I will describe in the following sections.
The first option that SQL Server offers you is a recompilation of the execution plan. SQL Server provides 2 different options for you to use:
- A recompilation of the whole, complete stored procedure
- A recompilation of the problematic SQL statement – a so called Statement Level Recompilation (available since SQL Server 2005)
Let’s have a more detailed look at both options. The following code shows how you can apply a recompilation of the whole stored procedure with the RECOMPILE option.
-- Create a new stored procedure for data retrieval
CREATE PROCEDURE RetrieveData
(
    @Col2Value INT
)
WITH RECOMPILE
AS
SELECT * FROM Table1
WHERE Column2 = @Col2Value
GO
When you run such a stored procedure, the Query Optimizer always recompiles the stored procedure at the beginning of the execution. Therefore you always get an execution plan which is optimized for the currently provided input parameter values. As a side-effect the execution plan no longer gets cached, because it doesn’t make sense to cache a query plan which is recompiled every time. When you have a large, complicated stored procedure a RECOMPILE query hint at the stored procedure level doesn’t always make sense, because your whole stored procedure is recompiled.
Maybe you have a Parameter Sniffing problem in just one specific SQL statement. In that case the overhead for the recompilation of the whole stored procedure would be too much. For that reason SQL Server, since version 2005, offers a so-called Statement Level Recompilation. You are able to mark a specific SQL statement for recompilation instead of the complete stored procedure. Let’s have a look at the following code.
-- Create a new stored procedure for data retrieval
CREATE PROCEDURE RetrieveData
(
    @Col2Value INT
)
AS
SELECT * FROM Table1
WHERE Column2 = @Col2Value

SELECT * FROM Table1
WHERE Column2 = @Col2Value
OPTION (RECOMPILE)
GO
In that example the second SQL statement is recompiled every time that the stored procedure is executed. The first statement is compiled during the initial execution, and the generated plan is cached for further reuse. That’s the preferred way to deal with Parameter Sniffing when you have no influence on the indexing strategy of your database.
In addition to the recompilation of a stored procedure or the SQL statement, SQL Server also offers you the OPTIMIZE FOR query hint. With that query hint you are able to tell the Query Optimizer for which specific parameter values the generated query plan should be optimized. Let’s have a more detailed look at the following example.
-- Create a new stored procedure for data retrieval
CREATE PROCEDURE RetrieveData
(
    @Col2Value INT
)
AS
SELECT * FROM Table1
WHERE Column2 = @Col2Value
OPTION (OPTIMIZE FOR (@Col2Value = 1))
GO
As you can see from the stored procedure definition, the execution plan of the SQL statement is always optimized for an input parameter value of 1 for the parameter @Col2Value. Regardless of which input value you provide for this parameter, you will always get a plan compiled for the value of 1. With this approach you are already working with a sledgehammer against SQL Server, because the Query Optimizer no longer has any choice – it must always produce a plan optimized for the parameter value of 1. You can implement this query hint when you know that a query plan optimized for a specific parameter value should almost always be generated. You will be able to predict your query plans when you restart SQL Server or when you perform a cluster failover.
If you are going to go down this route, you really have to know your data distribution, and you also need to know when your data distribution changes. If the data distribution changes, you also have to review your query hint to see if it’s still appropriate. You can’t rely on the Query Optimizer, because you just have overruled the Query Optimizer with the OPTIMIZE FOR query hint. You must always keep this in mind! In addition to the OPTIMIZE FOR query hint, SQL Server also offers the OPTIMIZE FOR UNKNOWN query hint. If you decide to use that query hint, the Query Optimizer uses the Density Vector of the underlying statistics object to derive the cardinality. The plan that the Query Optimizer generates for you depends on the data distribution. If your logical reads are over the Tipping Point, you end up with a Table/Clustered Index Scan…
In this blog posting I have shown you multiple ways to deal with the Parameter Sniffing problem in SQL Server. One of the most common root causes of this specific problem is a bad indexing strategy, where with a selective parameter value the Query Optimizer introduces a Bookmark Lookup into the execution plan. If such a plan gets reused in the future, your I/O costs will explode. I’ve already seen execution plans in production, which generated more than 100 GB of logical reads, just because of this problem. A simple RECOMPILE query hint on the statement level fixed the problem, and the query produced just a few logical reads.
If you can’t influence the indexing strategy of the database, you can work with the RECOMPILE query hint on the stored procedure or SQL statement level. As a side effect the recompiled plan will no longer be cached. In addition to these query hints, SQL Server also offers you the OPTIMIZE FOR and OPTIMIZE FOR UNKNOWN query hints. If you work with these query hints, you really have to know your data and your data distribution, because you are overruling the Query Optimizer. Be always aware of this fact!
Thanks for your time,
|
OPCFW_CODE
|
In the past, I wrote a blog to raise awareness about the session date in Microsoft Dynamics. When you read that blog, you will learn about its purpose, but also that there are two options to change the session date in the standard application. Although this is a great and helpful feature, some organizations might want to lock this option down to prevent incorrect usage and to ensure people post in the most recent period by default unless they need to backdate a transaction. In this blog, I will demonstrate how to disable the option for all or particular end users.
The most obvious way for end users to change the session date is the calendar control on the default dashboard. When someone clicks on a date, a confirmation box is prompted which will change the session date in memory.
There is no visualization on pages where users can enter transactions, like journal lines. The user might have set an incorrect date, might have forgotten which date he entered, or might have refreshed the browser, which resets the session date because a new session is created for the user. For that purpose, I created a small tool with which the session date can be pinned in the Dynamics 365 application. Still, you might want to lock the option so users cannot change it.
When someone takes an initial look at the page design of the default dashboard, he will learn that technically a container control is used which shows an extensible control. There is no form part managed by a menu item that can be secured, so initially you would think disabling the calendar control would need a customization.
When you look at the second option for changing the session date and time (Common menu), you will notice this is managed via a menu item called SystemDate. When deep diving into the technical implementation of the calendar control and the option to change the session date, this same securable object is also considered in coding to show the confirmation box or not.
There are various ways to manage the access level of the Session date and time form, and together with this also the calendar control. Full permissions for the session date are part of the System user role. I would not suggest changing the contents of this role, certainly not if there should be a differentiation between which users should and should not be able to change the session date in the same environment.
One option would be creating a new privilege with deny permissions for the session date. You can then link this to an existing or new role directly or via a duty. As you can see in the example below, I denied all access. When users get the deny permissions assigned via a security role, the option for the session date and time is removed from the common menu. The date selection in the calendar control will also no longer have any effect.
There is more…
Initially, when I looked at the code, I thought that keeping read access and denying update, create, and delete would be sufficient. As you can see in the screenshot below, permissions are checked for delete, add, and edit rights, not for view. However, the method sysDictMenu.rights() only returns NoAccess or Delete: when view permissions are granted, the value Delete is returned, otherwise NoAccess. So with view permissions, the user would still be able to change the session date on the Session date and time form. For this reason, the easiest way is to deny all access to this menu item.
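Sketched as X++ pseudocode (based only on the behaviour described in this post; this is not the actual standard application code, and the constructor call is an assumption):

```xpp
// Pseudocode sketch: how the calendar control could decide whether to act,
// based on the SystemDate menu item permissions described above.
SysDictMenu sessionDateMenu = SysDictMenu::newMenuItem(menuItemDisplayStr(SystemDate));

// rights() only ever returns AccessType::NoAccess or AccessType::Delete here;
// even view-only permission is reported as Delete.
if (sessionDateMenu.rights() != AccessType::NoAccess)
{
    // Show the confirmation box and change the session date in memory.
}
// With deny-all permissions the check fails and the date click is ignored.
```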
I hope you liked this post and that it will add value for you in your daily work as a professional. If you have related questions or feedback, don’t hesitate to use the Comment feature below.
That’s all for now. Till next time!
|
OPCFW_CODE
|
Adding a reference to TOMwrapper.dll in Excel 2013
Hi Daniel,
Thank you for this incredible tool!
I'm trying to add a reference to TOMwrapper.dll in Excel 2013, but I get this error:
Impossible to add a reference to the file
Adding the reference in Visual Studio Community 2019 works.
Could you provide some more details on how you’re adding the file as a reference in Excel? I wasn’t even aware that it was possible to add .NET libs in Excel. Is it for VBA/macros?
Hello Daniel,
Yes, it is inside the VBA editor. In the Tools menu, I select References. Then I click on the third button to browse for a dll.
https://docs.microsoft.com/fr-fr/office/vba/language/reference/user-interface-help/can-t-add-a-reference-to-the-specified-file
Is TOMwrapper.dll 32 or 64 bit?
Have a nice day
It's 32 bits, as is Tabular Editor.
But the problem is that the TOMWrapper.dll is a .NET library, which was not designed for COM interoperability, which is a requirement in order to load the DLL in VBA. It is not a trivial amount of work to redesign TOMWrapper in order to support this.
May I ask what it is you're trying to achieve with this integration?
Could you perhaps achieve the same thing by calling Tabular Editor's CLI with suitable arguments from your macro? For example, you could use VBA to generate a C# script that would update measure descriptions and DAX expressions or whatever, and then use the CLI to execute that script against a model from your macro.
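A hedged VBA sketch of that workflow (the install path, file names, and one-line script body below are illustrative assumptions, not tested code):

```vba
' Illustrative only: generate a Tabular Editor C# script from VBA,
' then run it against a model file via the command-line interface.
Sub RunTabularEditorScript()
    Dim scriptPath As String, cmd As String
    scriptPath = Environ$("TEMP") & "\SetTranslations.cs"

    ' Write the C# script (a single hard-coded line for brevity).
    Dim f As Integer
    f = FreeFile
    Open scriptPath For Output As #f
    Print #f, "foreach(var m in Model.AllMeasures) m.TranslatedNames[""da-DK""] = m.Name + "" in Danish"";"
    Close #f

    ' Assumed install path for TabularEditor.exe; adjust as needed.
    cmd = """C:\Program Files (x86)\Tabular Editor\TabularEditor.exe"" " & _
          """OriginalModel.bim"" -S """ & scriptPath & """ -B ""NewModel.bim"""
    Shell cmd, vbNormalFocus
End Sub
```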
Ok Daniel, that's clear.
My company operates in 22 countries, so we need to translate many reports into at least 5 languages. On the other hand, I have a macro in Excel which automatically translates PowerPoint files. I began adapting this macro for Power BI objects and have already got some results.
At this point I wonder what the best solution is for communicating with the model:
SSAS API - it will probably take a lot of time for me to understand how to use it
Tabular Editor through TOMwrapper - not compatible
Tabular Editor through export / import of translations - object IDs are missing
Tabular Editor through custom actions - I tried, but I'm not skilled enough with this interpreter, so I suppose it will take a lot of time
Tabular Editor through the CLI - same as above
Maybe you know of more possibilities?
If you want a demonstration, we can have a short Skype meeting so you will understand the interest of doing it in Excel.
I see, thanks for clarifying.
Option 1: You would face the same issue as for option 2. There's no COM interoperability in any of the APIs available for SSAS, as far as I know. Maybe you can use some legacy OLEDB stuff, but then you'd have to execute TMSL/XMLA commands against the model, which is probably going to be very cumbersome and difficult.
Option 3: What object IDs are you referring to? Are you aware that you can load the exported JSON files into this tool?
Options 4 and 5: Feel free to ask specific questions on what you want the scripts/custom actions inside Tabular Editor to do, and I'll be happy to help. It's very easy to set a translated name through a script. For example, the following script assumes that you've already added a Danish culture to your model. When run, the script will set the translated names of all selected measures to <original measure name> in Danish. So if you selected a measure named [Reseller Sales], its translated name would be [Reseller Sales in Danish]:
foreach(var measure in Selected.Measures)
{
measure.TranslatedNames["da-DK"] = measure.Name + " in Danish";
}
Here's an introductory article to scripting in Tabular Editor. Here's a collection of useful sample scripts.
To execute a script from the command line, simply run Tabular Editor this way:
TabularEditor.exe "OriginalModel.bim" -S "MyScript.cs" -B "NewModel.bim"
This will load a model from a file called "OriginalModel.bim", execute the script called "MyScript.cs" and save the resulting model into a file called "NewModel.bim". You can find all CLI options here.
Closing this, but feel free to post new issues if you have any specific questions.
Thank you for your reply, which is perfectly clear.
Yes, I know about SSAS Tabular Translator, but it doesn't seem to be supported any more, and there is no automatic translation.
The IDs could be, for example, the column ID in the model, so that if a column is renamed in the model I will not consider it a new column.
The script functionality in Tabular Editor is absolutely amazing. I will try to manage with it, but I haven't found an object ID yet. If I still can't find one, I will create a new issue.
|
GITHUB_ARCHIVE
|
ANT has now been added as a collateral asset to UMA (see Github merge here). This enables ANT to be used in the creation of KPI options for the Aragon community. KPI options are synthetic tokens that will pay out more rewards if the KPI meets predetermined targets before a preset expiry date.
Nearly all DAOs created with Aragon are currently on Aragon v1. With the launch of Aragon v2, we would like to incentivise Aragon DAOs to transition from Aragon v1 to Aragon v2. To incentivise this upgrade, we’re proposing to use KPI options as a mechanism to accelerate the transition to Aragon v2.
The options would be distributed to all Aragon v1 DAOs immediately after the option creation. At the option expiry, Aragon v1 DAOs would then be able to redeem their options for the ANT collateral in the KPI option contract. The amount of collateral they can redeem will be dependent upon:
- The DAO’s proportional share of AUM (USD denominated) relative to all Aragon v2 DAOs on 30th June 2021.
- How quickly the DAO upgrades to Aragon v2 relative to other v2 DAOs
The proposal below includes steps for an initial test of KPI options. Should it be successful, we’d propose increasing the collateral underlying the option to a much larger amount and expanding the use of KPI options to meet other targets deemed important by the community of ANT holders.
KPI option details
The contract would start on 7th April 2021, 1pm UTC and end on 30th June 2021, 1pm UTC.
The Key Performance Indicator (KPI) would be the total value of assets (AUM), denominated in USD, in all v2 DAOs by 30th June 2021.
The KPI options would be distributed to all Aragon V1 DAOs.
We propose to place the KPI option in a bonding curve to incentivise early adopters of Aragon v2 in favour of late adopters.
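As a rough illustration of the payout idea (the function below is an assumption for demonstration only: it ignores the time-weighting/bonding-curve component and simply splits the ANT collateral by each DAO's share of v2 AUM):

```python
# Illustrative pro-rata payout model for the KPI options; not the actual
# contract logic. DAO names and AUM figures are made up.
def pro_rata_payout(collateral_ant: float, dao_aum: dict) -> dict:
    """Split the ANT collateral among v2 DAOs by their USD AUM share."""
    total_aum = sum(dao_aum.values())
    if total_aum == 0:
        return {dao: 0.0 for dao in dao_aum}
    return {dao: collateral_ant * aum / total_aum for dao, aum in dao_aum.items()}

# 100k ANT collateral split across three hypothetical v2 DAOs:
payouts = pro_rata_payout(100_000, {"dao_a": 50e6, "dao_b": 30e6, "dao_c": 20e6})
print(payouts)  # dao_a receives half of the collateral, and so on
```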
Ahead of running a vote on Snapshot for this to get the signalling input from ANT holders, I’d like to post a draft of the planned votes here for community review.
Vote 1 - Amount of ANT to be used as collateral in the option creation.
- 0 ANT (signals you’re not in favour of using KPI options)
- 50k ANT
- 100k ANT
- 150k ANT
Vote 2 - Based on the amount of collateral from vote 1, what upper threshold should we set for USD AUM in V2 DAOs by 30th June 2021
- 100 million
- 200 million
- 300 million
- 400 million
- 500 million
- 600 million
- 700 million
- 800 million
- 900 million
- 1 billion
|
OPCFW_CODE
|
I am trying to install QNX 6.1.0 on an Extensa 900CD notebook (graphics adapter: CT 65550). I have downloaded the ISO image, and the installation process gives no problems until I reboot.
At the first boot of the notebook, it freezes on crttrap with this message on the screen:
Running crttrap, please wait…
/usr/photon/bin/devgt-iographics -dldevg-chips_hiqv.so -I0 -d0x102c,0x00e0
mode switcher init: Bad address
/usr/photon/bin/devgt-iographics -dldevg-vesabios.so -I0 -d0x102c,0x00e0
At this point, if I power off the notebook and restart QNX, it starts in VGA mode without the touchpad, but the serial mouse is OK.
Now I have discovered that if I start QNX pressing Space when it boots, then F11 (disable the enumerators), then F2 (disable the plug-and-play ISA enumerator),
QNX starts in SVGA mode, the “new video card has been detected” dialog appears on the screen and I can select either SVGA or VESA mode, and even the touchpad is
OK; but when I restart QNX it freezes again as before.
If I modify /etc/system/enum/devices/graphics and, at the end of the file after the line
all # No PCI display device found
add:
echo(devgt-iographics -dldevg-chips_hiqv.so, $(fname))
If I restart QNX pressing Space + F11 + F2,
it starts correctly with the accelerated CT65550 video driver, but as before
it freezes the machine if I do not press F11 and F2.
I have also tried modifying the section about Chips and Technologies
in /etc/system/enum/devices/graphics, replacing
echo(devgt-iographics -dldevg-chips_hiqv.so -I$(index) “-d0x$(ven),0x$(dev)”, $(fname))
with
echo(devgt-iographics -dldevg-chips_hiqv.so, $(fname))
but as before it only starts if I press F11 + F2 during the boot process.
I have also tried modifying the Unaccelerated VESA 2.00 section, with no results.
I have also downloaded a fixed Chips and Technologies video driver from the
QNX site, but it freezes exactly as before.
It looks like the only way to start the chips_hiqv video driver is to disable the
ISA PnP enumerator when QNX boots.
The pci -v command gives this information about the display:
Class = Dispaly (VGA)
Vendor ID = 102ch, Chips And Technologies
Device ID = e0h, 65550 LCD/CRT controller
PCI index = 0h
Class Codes = 030000h
Revision ID = c6h
Bus number = 0
Device number = 6
Function num = 0
Status Reg = 280h
Command Reg = 83h
Header Type = 0h Single-function
BIST = 0h Build-in-self-test not supported
Latency Timer = 0h
Cache Line Size = 0h
PCI MEM Address = fd000000h 32bit length 16777216 enabled
Max Lat = 0ns
Min Gnt = 0ns
PCI Int Pin = NC
Interrupt Line = 0
Can someone help me solve this problem?
P.S. This notebook also has an ESS1878 audio device
(it looks like an ISA PnP device, detected at addresses 0200, 0388, 0300, IRQ 5, DMA 0,3).
Does QNX 6.1.0 support this audio card?
|
OPCFW_CODE
|
Small delay in responsiveness
Hello,
I've noticed that when using neovim (I don't know about any other contexts), when moving the cursor down or up, it takes a small amount of time after releasing the movement key before the cursor really stops. Is this the normal behaviour?
Other than this everything is really smooth.
Some information:
OS: archlinux
kernel version: 4.17.6-1-ARCH
alacritty version: 0.1.0
I am using X11.
Thank you very much in advance
Hello,
Thank you for your answer. I never managed to record a good quality video (I suppose because I needed something like screenkey).
To be more precise, what I meant is that when the down arrow (for example) is pressed for a certain amount of time, the cursor keeps moving after I release it. It doesn't stop immediately; it only stops after some time.
Thank you very much in advance, hoping I was clearer.
Unfortunately I can't reproduce this, it almost sounds like there's a build-up of events and Alacritty is unable to keep up.
Could you give me some more information about your system? GPU/CPU etc, just to judge if your computer is ancient or not. :D
Hello,
Thank you for your response.
So to answer your question, I am not using any gpu, only my i7-8700k.
Other than that, I am using gnome-shell as my DE.
Thank you very much in advance.
Hmm okay, so Intel integrated graphics. I don't think your system's too slow, it sounds like plenty of horsepower to me.
Could you share a screen recording with alacritty -vvv? Like that there shouldn't be any need for something like screenkey.
Hello,
So I managed to get a video. I hope it's okay.
Thank you very much in advance.
Is this reproducible with the default nvim config? You can test that using nvim -u NONE.
I think I am having the same / similar issue. I think it is related to handling keyboard input since I can't seem to reproduce with something that produces a lot of output.
Notice the jump in input towards the end of the gif.
I have a better quality video, but github only allows gifs ...
Let me know if there is other info I can provide to help.
I'd be interested in seeing if you can reproduce this with glutin itself.
For anyone interested in testing this, here are some instructions:
Clone https://github.com/tomaka/glutin
Change into the glutin directory and run cargo run --example window
Wait for compilation :clock1030:
Once the window opens, press lotta keys/hold keys
If this issue is related to glutin, I'd expect that the output in the window you ran this from would keep going after you stopped typing. I personally can't reproduce this issue, so for me the behavior when testing this is that as soon as I stop typing, there are no more events printed in the console.
If however glutin should stop spouting event messages as soon as you stop typing, this issue might actually be related to Alacritty itself.
Hello,
Sorry for the late response. So, even with nvim -u NONE, the problem persists.
I haven't tried yet with glutin.
Thank you very much in advance.
Hello,
I had the chance to test it with glutin, and it seems to be indeed coming from glutin. I am under the impression that output keeps going on even though I stopped typing.
Can anyone confirm this ?
Thank you very much in advance.
If this is a problem with glutin, it probably makes sense to report it upstream .
Thank you very much for your help, I opened an issue on the glutin repository.
I was originally posting in #2514 about something similar and just wanted to post my initial findings here. I'm on a late 2018 MacBook Pro @ 10.14.5
I cloned both winit and glutin masters and tried cargo run --example window on both. winit did not seem to exhibit the same problems with key repeats that I was seeing in alacritty. Unfortunately, I can't get the glutin example to run on my machine:
thread 'main' panicked at 'gles2 function was not loaded', /Users/casey/Sites/glutin/target/debug/build/glutin_examples-f84a4408dcde85c4/out/gl_bindings.rs:1158:13
Hope this helps. Like I said in the other comment I started noticing this after upgrading from macOS 10.13 to 10.14.
My 5 cents (macOS 10.11.6): was experiencing the same issue (e.g., in Vim, characters were continuing to be typed in even after I stop physically entering them) with alacritty 0.3.3 1067fa6 (to add some timeline, I cloned and built it on the 13th of September).
Updating to the currently latest version (alacritty 0.3.3 3475e44) almost solved the issue: some practically unnoticeable lag is still present (comparing with Terminal.app, for example) but the improvements are significant.
@for-coursera Could you try compiling the latest master from source? Looking at things, the upstream issue marks this as resolved, so I think if there are more problems it should be handled separately.
The latest master is 3475e44, which is the one I compiled (as I said in my comment above). Or am I missing something? :)
Oh, I completely overlooked the hash and thought you were talking about 0.3.3 since you mentioned that version explicitly, sorry.
I still think the original underlying issue is resolved, though if you still can notice this there might be other stuff involved. The upstream bug was strictly about X11, so there might be some weirdness going on with macOS too.
So just to clarify, if you hold down a key (let's say e), it will output at least one more e after you've already released the key, right? Even on the latest master (compiled in release mode). I'm asking since you've used the word "lag", just to make sure you're not talking about characters not streaming in consistently.
So just to clarify, if you hold down a key (let's say e), it will output at least one more e after you've already released the key, right?
That was my initial impression. But now after some additional testing I'm not really sure :) So, may be everything actually IS all right, and I was just a bit blind previously :)
It's obviously hard for me to judge since I'm not looking at it, but if you're saying that it has significantly improved on master, likely even resolved, then I don't think there's any further reason to take action.
If you notice any more problems, please just open a separate issue and we can investigate it separately, otherwise it's just going to be buried here forever.
then I don't think there's any further reason to take action.
Agreed. I was posting my initial comment more like a reference than an actual report (and since the issue was still open).
Thanks for your help, and sorry for this confusion :)
No worries, thanks for the feedback. Wouldn't have noticed that this could be closed without you.
|
GITHUB_ARCHIVE
|
Want to create your own World of Warcraft Cataclysm private server? We've got you covered! In this guide, we'll walk you through the process of compiling and setting up your server.
For this tutorial we will be using a source maintained by: The-Cataclysm-Preservation-Project
Step 1: Software Requirements
Please make sure to read the software installation section to ensure you install the software correctly.
- Git v2.39.0
- Visual Studio 2019
- VS 2019 (any edition) no longer includes the C++ compiler as part of the default installation. You will need to include it during the installation process as shown in this picture: Image Here
- MySQL Installer 8.0.31
- Boost 1.73
- Important information on how to setup boost can be found here
- cMake 3.25.1
- OpenSSL 1.1.1s
Step 2: Software Installation
Git:
Installing Git is a straightforward process. Simply follow the prompts during the installation and click "next" until the installation is complete.
Visual Studio 2019:
Installing Visual Studio 2019 is also an easy process. However, make sure to select the option to install "Desktop C++" during the installation, as this is a requirement for compiling TrinityCore.
Once the installation is complete, you do not need to sign in. Instead, click "Not now, maybe later" and select your preferred theme before clicking "Start Visual Studio". You can exit Visual Studio for now, as it is not needed for the next step.
MySQL:
Installing MySQL is easy. When you first open the installer, you will be prompted to upgrade; simply click "Yes". On the next screen, select the option to "Install server only".
You will need to set a MySQL root password. Remember this information, as it will be important later on.
After setting the MySQL password, you can just click next until you reach the last page, then click execute and finish. We have now installed MySQL.
HeidiSQL:
HeidiSQL is simple to install; just open the installer, click next, and complete the install.
Boost:
We have a complete tutorial on how to install and set up Boost 1.73.0; please read it if you have never done it before. You can find that tutorial here: Click Link Remember, for this tutorial you will need 64-bit Boost, since all the tools we have downloaded are 64-bit.
cMake:
Installing cMake is easy, no extra steps required. Simply click and install it.
OpenSSL:
Installing OpenSSL is easy, no extra steps required. Simply click and install it.
We now have all the tools required to compile, let’s move onto the next step.
Step 3: Cloning Trinity Core 4.3.4
To clone the source from GitHub, we will use the Git Bash tool on your desktop. Begin by right-clicking on an empty space on your desktop and selecting “Git Bash Here.”
After the menu appears, enter or copy the following line into the window:
git clone https://github.com/The-Cataclysm-Preservation-Project/TrinityCore.git
Step 4: Prepare The Source
We will need to create a new folder on our desktop called "Build", you can name this anything but for this tutorial we will name it Build.
We will now open CMake and select our Trinity Core 4.3.4 source folder and our Build folder.
Once you have selected the two folders, you will need to click on Generate and select our compiler. We will need to select Visual Studio 16 2019 and then click "Finish".
If you followed the tutorial to this point, you should have no errors and see this:
Step 5: Compiling
We are now ready to compile our source code. You can exit out of CMake, and open the Build folder. Click on ALL_BUILD and let it open Visual Studio for you.
Now that Visual Studio has our source code open, we will need to change it from debug mode to release mode.
We are now ready to compile our source code; you can start the process by hitting F5, or by right-clicking your source code and selecting Build.
You should have no errors during compiling and see this once its completed:
Step 6: Getting Required Files & Renaming Configuration Files
Now that we have successfully built our source code, our files will be located in Build->Bin->Release. We will need to rename our configuration files.
Rename bnetserver.conf.dist to bnetserver.conf
Rename worldserver.conf.dist to worldserver.conf
Now we will need to grab a few libraries from the programs we installed at the start of our tutorial. Please note: copy these files from these locations; do not move them.
You may find this library in: C:\Program Files\MySQL\MySQL Server 8.0\lib
libssl-1_1-x64.dll & libcrypto-1_1-x64.dll
You may find these two libraries in: C:\Program Files\OpenSSL-Win64\bin
Step 7: Extracting Data From Game Client
We will need to extract the data from our WoW 4.3.4 game client. These files are required to successfully start your worldserver.
You will need to copy the following files from your Build->Bin->Release into your WoW 4.3.4 client folder (the folder where your wow.exe is stored).
Next, navigate to your TrinityCore folder (the one you obtained from GitHub, not your build folder) and go to the “TrinityCore/contrib” subdirectory. Copy the extractor.bat from there and paste it into your WoW 4.3.4 client folder.
Now start it up, and choose option 4 to extract all the data from the game client. Important: This may take a couple of hours, time varies depending on computer resources.
Once the data has been successfully extracted, you will need to copy the following folders over to your Build->Bin->Release folder.
Step 8: Populating The Database Automatically
We will now proceed to populate the databases. This is a process that TrinityCore performs to set up the databases and apply all the latest database fixes automatically.
Connecting to your database:
We are now ready to connect to our database, open HeidiSQL.
Running Query To Create Databases And User
We will need to run the queries below to our MySQL server to create the username trinity, and the databases Auth, World, Characters And Hotfixes.
CREATE USER 'trinity'@'localhost' IDENTIFIED BY 'trinity' WITH MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0;
GRANT USAGE ON *.* TO 'trinity'@'localhost';
CREATE DATABASE `world` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE DATABASE `characters` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE DATABASE `auth` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE DATABASE `hotfixes` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
GRANT ALL PRIVILEGES ON `world`.* TO 'trinity'@'localhost' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON `characters`.* TO 'trinity'@'localhost' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON `auth`.* TO 'trinity'@'localhost' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON `hotfixes`.* TO 'trinity'@'localhost' WITH GRANT OPTION;
To proceed, we need to first download the database from the Github repository specified in this tutorial. Please navigate to the release section of The Cataclysm Preservation Project on Github(here), and download the most recent version of the world database as shown in the image below:
We will need to extract these two SQL files to our Build->Bin->Release folder as shown below:
Now open worldserver.exe; it should ask you if you want to create the hotfixes database. Simply type yes.
Afterwards it should automatically detect that we have not set up any of our other databases and start to populate them using the files from the source and the world database we just downloaded.
Please note: populating the database may take a few minutes depending on your computer specs. After it is done, there should be no errors and the worldserver should start right up.
Congratulations you have now successfully compiled your own Cataclysm 4.3.4 Private Server!
Now that we have covered all the steps to compile, note that connecting to your new private server can be tricky due to the way the source handles authentication. No worries though, you can follow the two links below to learn more about getting the right client setup.
My aim in creating these tutorials is to help newcomers learn the fundamentals of working with WoW Emulation. Thank you for taking the time to read my tutorial, and I sincerely hope you find it helpful.
If you have any questions or comments, please don't hesitate to contact me on Discord: privatedonut
|
OPCFW_CODE
|
sed command does not work when a comma is present
New to sed, so please bear with me...
I have a php file which contains the following line:
define('TARGET_A','044');
I'd like to find that line and replace it with the following, using sed:
define('TARGET_K','076');
I have tried:
$ sed -i 's/define\(\'TARGET_A\',\'044\'\)\;/define\(\'TARGET_K\',\'076\'\)\;/' myfile.php
I have tried SEVERAL variations, tried escaping the parens and removing the semicolon, nothing seems to work
ANY help at all GREATLY appreciated, thanks
You can't escape 's in a '-delimited script so you need to escape back to shell with '\'' whenever you need a '. You might be tempted to use " to delimit the script instead but then you're opening it up to shell variable expansion, etc. so you need to be careful about what goes in your script and escape some characters to stop the shell from expanding them. It's much more robust (and generally makes your scripts simpler) to just stick to single quotes and escape back to shell just for the parts you NEED to:
$ sed 's/define('\''TARGET_A'\'','\''044'\'');/define('\''TARGET_K'\'','\''076'\'');/' file
define('TARGET_K','076');
That's a lot of escaping. How about... no escaping at all?
sed -i '.bak' "s/define('TARGET_A','044');/define('TARGET_K','076');/" myfile.php
Example:
cternus@astarael:~⟫ cat myfile.php
define('TARGET_A','044');
cternus@astarael:~⟫ sed -i '.bak' "s/define('TARGET_A','044');/define('TARGET_K','076');/" myfile.php
cternus@astarael:~⟫ cat myfile.php
define('TARGET_K','076');
THANK YOU SINCERELY for this FAST answer! I can't even accept it yet you answered so quickly! hah! you ROCK. This worked!
Here's a tougher question: is there a way to auto-increment the number, with or without that leading 0 present?
@dsrupt no, that can't be done with sed but it would be trivial if you were using awk instead of sed. That's really a very different requirement though so post a follow up question with concise, testable sample input and expected output and this time show your target line(s) in context.
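For reference, here is a minimal awk sketch of that follow-up idea, assuming the value is always the fourth '-delimited field and should stay zero-padded to three digits:

```shell
# Hedged sketch only: increment the quoted number with awk (sed cannot do
# arithmetic). Splits on single quotes, bumps field 4, keeps zero padding.
printf "define('TARGET_A','044');\n" > myfile.php
awk -F"'" -v OFS="'" '{ $4 = sprintf("%03d", $4 + 1) } 1' myfile.php
# prints: define('TARGET_A','045');
```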
This worked for me:
$sed -i "s/define('TARGET_A','044');/define('TARGET_K','076');/" myfile.php
I changed the argument string delimiter to make it simpler.
|
STACK_EXCHANGE
|
I’ve been seeing this over the past few years, imagine this scenario:
You have a stored procedure that runs well most of the time but sometimes it’s WAYYYYY off. It’s almost as though the performance of it went from great to horrible in a split second (like falling off of a cliff). You don’t know why but someone says – it’s got to be the statistics. In fact, if you have the luxury of time (which most folks don’t have), you execute it yourself and you check the plan – WOW, the estimated number of rows is WAY off from the actual rows. OK, it’s confirmed (you think); it’s statistics.
But, maybe it’s not…
See, a stored procedure, a parameterized statement executed with sp_executesql, and a prepared statement submitted by a client ALL reuse cached plans. These plans were compiled using something called parameter sniffing. Parameter sniffing is not a problem in itself, but it can become a problem for later executions of that same statement/procedure. If a plan for one of these statements was created for parameters that only return 1 row, then the plan might be simple and straightforward: use a nonclustered index and then do a bookmark lookup (that’s about as simple as it can get). But if that same statement/procedure/prepared statement runs again later with a parameter that returns thousands of rows, then reusing the plan created by sniffing that earlier parameter might not be good.

And, this might be a rare execution. OR, it could be even more strange. These plans are not stored on disk; they are not permanent objects. They are created any time there is not already a plan in the cache, and there are a variety of reasons why these plans can fall out of cache. If it just so happens that an atypical set of parameters are the first ones used after the plan has fallen out of cache (better described as “has been invalidated”), then a very poor plan could end up in cache and cause subsequent executions with typical parameters to be way off. Again, if you look at the actual plan you’ll probably see that the estimate is WAY off from the actual. But, it’s NOT likely to be a statistics problem.
But, let’s say that you think it is a statistics problem. What do you do?
You UPDATE STATISTICS tablename or you UPDATE STATISTICS tablename indexname (for an index that you specifically suspect to be out of date)
And, then you execute the procedure again and yep, it runs correctly this time. So, you think, yes, it must have been the statistics!
However, what you may have seen is a side-effect of having updated statistics. When you update statistics, SQL Server usually* does plan invalidation. Therefore, the plan that was in cache was invalidated. When you executed again, you got a new plan. This new plan used parameter sniffing to see the parameters you used and then it came up with a more appropriate plan. So, it probably wasn’t the statistics – it was the plan all along.
So, what can you do?
First, do not reach for UPDATE STATISTICS as your first response. If you have a procedure that's causing you grief, you should first force a recompile to see if you can get a better plan. How? Use sp_recompile procedurename. This will cause any plan for that procedure in cache to be invalidated. This is a quick and simple operation. And, it will tell you whether you have a plan problem (and not a statistics problem). If you get a good plan, then what you know is that your stored procedure might need some "tweaking" to its code. I've outlined a few things that you can use to help you here: Stored procedures, recompilation and .NetRocks. If that doesn't work, then you MIGHT need to update statistics. What you should really do first, though, is make sure that the compiled value of the parameters IS the same as the execution value. If you use "show actual plan," you can see this by checking the Properties window (F4) and hovering over the output/select operator.
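As a sketch (reusing the hypothetical procedure name from before), the recompile-first approach and the compiled-vs-runtime check look like this; the ParameterCompiledValue/ParameterRuntimeValue attributes in the actual plan's XML are the same information the Properties window surfaces:

```sql
-- Invalidate the cached plan for just this (hypothetical) procedure.
-- This is a quick, metadata-only operation:
EXEC sp_recompile N'dbo.GetMembersByRegion';

-- Re-run with the troublesome parameter; a fresh plan is compiled
-- by sniffing THIS execution's value:
EXEC dbo.GetMembersByRegion @Region = 'US';

-- In the actual plan's XML (the same data the Properties window shows),
-- compare the compiled value against the runtime value:
--   <ColumnReference Column="@Region"
--                    ParameterCompiledValue="'XX'"
--                    ParameterRuntimeValue="'US'" />
```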
This will confirm that the execution did (or did not) use those values to compile the plan. If they were the correct values then you might have a statistics problem. But, it’s often blamed and it’s not actually the problem. It’s the plan.
OK, there’s a bit more to this…
*Do plans ALWAYS get invalidated when you update statistics? No…
And, also here: Statistics and Recompilation, Part II.
Here’s a quick summary though because it looks like things have changed again in SQL Server 2012…
- In SQL Server 2005, 2008 and 2008R2 – updating statistics only caused plan invalidation when the database option auto update statistics is on.
- In SQL Server 2012 – updating statistics does not cause plan invalidation regardless of the database option.
So, what’s the problem? Ironically, I kind of like this. I think that statistics has been blamed all too often for statement/plan problems when it’s not the statistics, it’s the plan. So, I like that there will be fewer false positives. But, at the same time, if I update statistics off hours, I DEFINITELY want SQL Server to invalidate plans and re-sniff my parameter (especially if the data HAS changed) and possibly get new plans from my updated stats.
In the end, I did chat with some folks on the SQL team and yes, it looks like a bug. I filed a connect item on it here: https://connect.microsoft.com/SQLServer/feedback/details/769338/update-statistics-does-not-cause-plan-invalidation#.
UPDATE – 12:55 (yes, only 2 hours after I wrote this).
It’s NOT a bug, it’s BY DESIGN. And, it actually makes sense.
If the plan should NOT be invalidated (directly due to statistics because the data has NOT changed) then it won’t. But…
If the plan should be evaluated (statistics have been updated AND data changed) then it will.
The key point is "data changed." An UPDATE STATISTICS alone will not cause plan invalidation (which is STILL different behavior from 2005/2008/2008R2), but it's the best of both worlds IMO. Only if at least ONE row has been modified will the UPDATE STATISTICS cause plan invalidation.
UPDATE 2: The key point is that there might still be some false positives and I'd still rather people try sp_recompile first, but it's good that UPDATE STATISTICS will cause plan invalidation. Still, it's a tad different than prior versions… interesting for sure.
A simple workaround is to use sp_recompile tablename at the end of your maintenance script but be aware that running an sp_recompile against a TABLE requires a schema modification lock (SCH_M). As a result, this can cause blocking. If you don’t have any long running reports (or long running transactions) at that time though, it should be quick and simple.
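For example, at the end of an off-hours maintenance script (the table name is hypothetical):

```sql
-- Update statistics during the maintenance window:
UPDATE STATISTICS dbo.Member;

-- Force plans referencing the table to recompile on next use.
-- Caution: sp_recompile against a TABLE takes a schema modification
-- lock (SCH_M) and can block if long-running queries are active.
EXEC sp_recompile N'dbo.Member';
```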
And, stay tuned on this one. In a later CU you should be able to remove the sp_recompile AND you won’t need to worry about the database option either (yeah!).
Thanks for reading,
|
Evolutionary Novelty in a Butterfly Wing Pattern through Enhancer Shuffling
Baxter, Simon W
Hanly, Joseph J
Dasmahapatra, Kanchon K
McMillan, W Owen
Wallbank, R., Baxter, S. W., Pardo-Díaz, C., Hanly, J. J., Martin, S., Mallet, J., Dasmahapatra, K. K., et al. (2016). Evolutionary Novelty in a Butterfly Wing Pattern through Enhancer Shuffling. PLoS Biology, 14 (e1002353). https://doi.org/10.1371/journal.pbio.1002353
An important goal in evolutionary biology is to understand the genetic changes underlying novel morphological structures. We investigated the origins of a complex wing pattern found among Amazonian Heliconius butterflies. Genome sequence data from 142 individuals across 17 species identified narrow regions associated with two distinct red colour pattern elements, dennis and ray. We hypothesise that these modules in non-coding sequence represent distinct cis-regulatory loci that control expression of the transcription factor optix, which controls red pattern variation across Heliconius. Phylogenetic analysis of the two elements demonstrated that they have distinct evolutionary histories and that novel adaptive morphological variation was created by shuffling these cis-regulatory modules through recombination between divergent lineages. In addition, recombination of modules into different combinations within species further contributes to diversity. Analysis of the timing of diversification in these two regions supports the hypothesis of introgression moving regulatory modules between species, rather than shared ancestral variation. The dennis phenotype introgressed into H. melpomene at about the same time that ray originated in this group, while ray introgressed back into H. elevatus much more recently. We show that shuffling of existing enhancer elements both within and between species provides a mechanism for rapid diversification and generation of novel morphological combinations during adaptive radiation.
This work was funded by BBSRC grant H01439X/1, ERC grant MimEvol and ANR grant HybEvol to MJ.
European Research Council (339873)
External DOI: https://doi.org/10.1371/journal.pbio.1002353
This record's URL: https://www.repository.cam.ac.uk/handle/1810/252540
Creative Commons Attribution 4.0 International License
Licence URL: http://creativecommons.org/licenses/by/4.0/
|
using System;
using DataStructures.Trees;
using Xunit;

namespace DataStructures.Tests.TreesTests
{
    public class TreeTests
    {
        [Fact]
        public void Add_Binary_Tree_can_add_multiple_nodes()
        {
            // Arrange
            BinarySearchTree tree = new BinarySearchTree();

            // Act
            tree.Add(6);

            // Assert
            Assert.Equal(6, tree.Root.Value);
        }

        //[Fact]
        //public void Breadth_first_test()
        //{
        //    BinaryTree tree = new BinaryTree();
        //    tree.Root = new BinaryTree.Node(1);
        //    tree.Root.Left = new BinaryTree.Node(2);
        //    tree.Root.Right = new BinaryTree.Node(3)
        //    {
        //        Left = new BinaryTree.Node(4),
        //        Right = new BinaryTree.Node(5),
        //    };

        //    var result = tree.BreadthFirst();
        //    Assert.Empty(result);
        //}
    }
}
|
Floppy Disk Notes
What follows is a set of links that I've collected over the years that
give technical information on how to encode/decode floppy disks signals, as
well as the theory of operation behind this dying medium.
I find reading these documents important for preservation purposes,
especially since computer programmers of the past relied on these engineering
details for copy protection purposes.
Additionally, a deep understanding of internals may one day help to recover
data that is thought to be lost using statistical analysis of the read signal.
Links are ordered relatively in the order I read them/I recommend reading
them, and sections tend to build upon each other.
- The Floppy User Guide
- A good overall technical description of how a floppy drive accesses data
- SA800/801 Diskette Storage Drive Theory of Operations
- Without question, the most important document on this list.
If you read any document, read this. It's not quite enough information
to build a floppy drive from scratch, but it's enough to bring someone
interested up to speed. Hard to believe this document is 40 years old in 2016!
- SA850/SA450 Read Channel Analysis Internal Memo
- This internal memo donated by a Shugart employee includes a
floppy drive read head transfer function analysis based on experiments
Shugart did in the late 70's.
Phase-Locked Loops (PLLs)
- Phaselock Techniques, Floyd M. Gardner
- A monograph on analog PLLs. Does not discuss All-Digital PLLs (ADPLLs).
- NXP Phase Locked Loops Design Fundamentals Application Note
- A quick reference for analog PLL design.
- Floppy Disk Data Separator Design Guide for the DP8473
- To be written.
- Encoding/Decoding Techniques Double Floppy Disc Capacity
- Gives background on more complicated physical phenomena associated with floppy drive recording, such as magnetic domain shifting.
- Floppy Data Extractor
- A schematic for a minimum component data separator that does not require a
PLL, but uses a digital equivalent. Perhaps a simple ADPLL?
- IBM's Patent for (1,8)/(2,7) RLL
- I'm not aware of any floppy formats that use (2,7) RLL, but hard drives
that descend from MFM floppy drive encodings do use RLL. RLL decoding is far
more involved than FM/MFM.
This is a format used by Apple II drives and descendants. Software has
more control over this format, so there are more opportunities for
elaborate data protection compared to the IBM platforms. TODO when I have
time to examine non-IBM formats.
IBM 3740 (FM, Single Density)
TODO. Described in Shugart's Theory of Operations manual.
IBM System 34 (MFM, Double Density)
TODO. Described in various documents on this page, but I've not yet found
a document dedicated to explaining the format.
Floppy Disk Controller ICs
- 765 Datasheet
- The FDC used in IBM PCs. It is not capable of writing raw data at the level
of the IBM track formats. Thus, attempting to write copy-protected floppies
is likely to fail with this controller.
- 765 Application Note
- NEC created an application note to discuss how to integrate the 765 into a
"new" system, either using DMA or polling on receipt of interrupts.
- TMS279X Datasheet
- Includes a diagram of the IBM System 34 track format.
- DP8473 Datasheet
- A successor to the 765 that is capable of handling formats such as 1.2MB High
Density (HD) disks
- Design Guide for DP8473 in a PC-AT
- TODO. It appears I lost my original commentary on this document.
Floppy Disk Controller Cards
- IBM PC FDC Card (765)
- Includes schematics. The PLL circuit on the last page is in particular worth studying.
If anyone has any interesting new documents to add, please feel free
to contact me, and I will add them to this page with credit!
Last Updated: 2022-04-30
|
Q: Is there any way to tell what other sites are linking to my pages?
- Curious in Castro Valley
There are a couple of ways to do this. The least effective (but easiest and fastest) way is to
use a search engine. Both AltaVista and HotBot allow you to search for links in
their databases. The second and most effective method uses something called the referer log, which is kept by most Web servers.
To see which pages in the HotBot database are linked to www.eff.org, you just have to type link:www.eff.org into the search box. It's that easy.
A referer log keeps track of what page a user was reading immediately
before coming to your site. Usually, this means that there's a link to your site from that
page. Most Web servers keep referer logs, though the log's exact syntax
varies from server to server. (Editor's note: Apparently, the engineer who
coined the phrase "referer log" didn't know how to spell it.)
I'm going to explain the referer log generated by the default logging module of Apache, the server software we use at HotWired.
A referer log looks like this:
http://www.blah.com/index.html -> /story/index.html
http://www.svelt.com/burn/ -> /icns/wow.gif
http://www.mom.com/ippy/ -> /index.html
http://www.meep.com/trash/ -> /so/cool.html
That's nice, but what does it mean?
The syntax of a referer log reads like this:
<pointing page> -> <page pointed to>
So, http://www.svelt.com/burn/ -> /icns/wow.gif means there's a link on the page http://www.svelt.com/burn/ that points to /icns/wow.gif.
So, what are some neat tricks I can do?
Well, if you have access to the referer log, then you probably have access to a Unix box, with its array of text utilities. Here are a couple of common referer-log munging techniques. Each of these is a command (or several commands piped together) that should be typed on a Unix command line.
For the purposes of this demonstration, assume the referer log's filename is ref_log.
What pages link to me?
The command: sort ref_log | cut -d- -f 1 | uniq will return a list of every site mentioned in your referer log.
What it does:
sort
Alphabetizes the list (needed for uniq, later).
cut -d- -f 1
Drops everything after the -> in the log, so you just get a list of who is linking to you, and not the pages they're linking to.
uniq
Deletes duplicate lines from an already sorted list.
How many times has someone been referred from a particular site?
The command: grep www\.meep\.net ref_log | wc -l will return the number of times the site www.meep.net appears in your referer log.
What it does:
grep www\.meep\.net ref_log
Picks out lines in the file that contain "www.meep.net" (you need to put a backslash in front of the "." character in grep).
wc -l
Counts the number of lines (one hit = one line).
Who is linking to a page other than /index.html, and where are they linking to?
The command: grep -v ' /index.html$' ref_log | sort | uniq | less
What it does:
grep -v ' /index.html$' ref_log
Gets lines that don't end in "/index.html." The -v means get lines that don't match. The $ anchors the pattern to the end of the line, so you're only looking at the ends of lines.
sort | uniq
Put the list in order, and throw away duplicates.
less
Look at the list one page at a time.
That's just the beginning of what you can do to manipulate your referer log. Combinations of these commands can be used to produce almost any kind of output.
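Putting a few of these pieces together: the sketch below builds a tiny sample log (the URLs are made up, in the same format shown above) and produces a list of your top referrers, busiest first.

```shell
# Create a small sample referer log in the format shown earlier.
printf '%s\n' \
  'http://www.blah.com/index.html -> /story/index.html' \
  'http://www.blah.com/index.html -> /story/index.html' \
  'http://www.meep.com/trash/ -> /so/cool.html' \
  'http://www.mom.com/ippy/ -> /index.html' > ref_log

# Top referrers: take the pointing page (field 1), sort it so uniq -c
# can count the duplicates, then sort the counts numerically, biggest first.
cut -d' ' -f1 ref_log | sort | uniq -c | sort -rn
```

Here www.blah.com would come out on top with a count of 2.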
|
ANN: xterm patch #225
dickey at radix.net
Sat Mar 24 10:36:17 PDT 2007
Patch #225 - 2007/3/24
* add useClipping resource to allow clipping to be disabled.
* use XftDrawSetClipRectangles to work around Xft pixel-trash
(report by Reuben Thomas).
* add configure option --enable-tcap-fkeys, and resource
tcapFunctionKeys, which can be used to tell xterm to use
function-key definitions from the termcap (or terminfo) which it
uses to set $TERM on startup.
* add resources altIsNotMeta and altSendsEscape to allow one to use
Alt-keys like the meta-key even if they are bound to different
keycodes (prompted by discussion with Daniel Jacobowitz).
* revert a change from patch #216 that unnecessarily made the
meta modifier override the eightBitInput resource if the alt- and
meta-modifiers happened to overlap (report/patch by Daniel Jacobowitz).
* correct associated font for active icon for colored text (broken
in patch #224).
* correct ifdef's for Darwin (patch by Emanuele Giaquinta).
* add highlightTextColor resource, and options -selfg, -selbg like
xwsh (adapted from patch by Victor Vaile).
* revise find_closest_color() function to address concern about
borrowing from Tcl/Tk (request by Dan McNichol).
* add "spawn-new-terminal" action, which can be assigned to key
translation, allowing one to spawn a new copy of xterm using the
current process' working directory (adapted from patch by Daniel
* improve select/paste between UTF-8 and Latin1 xterms by adapting
the translations from patch #185. Extend that to include
Unicode fullwidth forms FF00-FF5E. Also modify select/paste of DEC
line-drawing characters in Latin1 mode to use ASCII characters.
* add "Enable Bell Urgency" to VT Options menu, removed "Enable
* add bellIsUrgent resource to control whether the Urgency hint is set.
* modify to set Urgency window manager hint on bell, reset it on
Focus-In event (patch by Emanuele Giaquinta).
* add --disable-setgid configure option (request by Miroslav Lichvar).
* fix a possible infinite loop in last change to dabbrev-expand()
(patch by Emanuele Giaquinta).
* modify initialization to set the pty erase value if the erase is
set in the ttyModes resource. This overrides the ptyInitialErase
setting (request by Lluis Batlle i Rossell).
* add initialFont resource to xterm widget, like tek-widget (Debian
* amend change to boldMode from patch #223 for Debian #347790.
As noted in Debian #412599, that made xterm no longer match the
documented behavior. Add new resource alwaysBoldMode to allow
overriding the comparison between normal/bold fonts when deciding
whether to use overstriking to simulate bold fonts.
* restore background color in ClearCurBackground(), omitted in
changes for patch #223 (report by Miroslav Lichvar).
* correct logic for repainting double-width TrueType characters
(prompted by test-case for Novell #246573).
* add a check to avoid trying to repeat a multibyte character
(report by Sami Farin).
* modify parameter to XftNameParse() to select wide face-name as
needed, to make -fd option work (patch by Mike Fabian, Novell
* correct logic for mouse highlight tracking's abort sequence,
broken in a restructuring modification from patch #224
(report by Thomas Wolff).
* revert the simplification of blinking cursor, since that broke the
xor'ing introduced in patch #193 (report by Thomas Wolff).
Thomas E. Dickey
|
Data science is an exciting area to work in, with skilled data scientists much in demand. Using advanced analytic techniques and scientific principles, data scientists can discover valuable information relevant to virtually all aspects of business. That includes customer information for marketing campaigns, identifying and blocking cyber-attacks and fraud, managing equipment and managing financial risks.
These roles require an array of skill sets, often drawing people from mathematics and statistics backgrounds. But good software skills in data science roles are essential to success, especially with the range of software systems commonly used. Gaining these skills can help if you wish to advance your career in this field.
Knowing how to write code is required for a data scientist. Python is the most common computer language in data science, but it is not the only one, with R, Java, Scala, Octave and Clojure also common. Good coding skills help with flexible data transformation to speed up workflow and give greater control over the data, so time spent practicing and learning to code is well spent.
Increasingly tools are being used in data science, replacing the more traditional manual calculations. When using these tools and machine learning libraries, a knowledge of coding is necessary. For some businesses, an understanding of coding is considered so essential that if their data analyst does not have adequate coding skills, they will pair him or her up with a coder to streamline the work for maximum efficiency.
A key task for those working in data science is preparing the data for processing. It is, therefore, essential for the data scientist to know how to effectively process the information. Database management software usually includes a group of programs that can be used to manipulate, edit or index the data. In large systems it might be a multi-user system, providing access to the data in parallel. Data scientists need to be confident in using common database management software including SQL Server, Oracle and IBM DB2.
Windows Azure, IBM Cloud, Google Cloud and Amazon Web Services are popular cloud platforms for use in data science. These platforms provide access to operational tools, frameworks, programming languages and databases that help data scientists manage the vast amount of data. Using cloud computing, data scientists can carry out tasks including data acquisition, data mining, testing predictive models and tuning data variables. Understanding the concept of cloud computing is one of the key computing skills a good data scientist will need.
Machine Learning uses a type of artificial intelligence (AI), which looks at the use of data and algorithms to mimic how people learn. Machine learning and AI are skills that are particularly in demand in companies that uses a data-centric approach to decision making. Areas of data science that are likely to use machine learning include airline route planning, healthcare, fraud detection and facial and voice recognition systems, so anyone considering a career in these areas should consider developing their understanding of machine learning.
The amount of data produced on a daily basis is phenomenal thanks to the internet and social media, with recent statistics revealing we create 2.5 quintillion bytes of data each day. This data is high in volume, velocity and veracity, something that is known as the 3vs of Big Data. To store, use and manage all this data effectively, many companies are turning to Big Data Technology, such as Spark, Hadoop and Apache Storm. Proficiency in these technologies is fast becoming essential to prevent organizations becoming overwhelmed with the sheer amount of data they need to manage.
Gaining the skills
For those wanting to start or advance in a data science career, there is training available to fill any gaps in your skillset. If there is just one area, such as coding that you feel uncertain about, a course to target this specific area can help you gain skills valuable for employment. Building software skills in data science disciplines can be done a la carte or more comprehensively. If you feel you have several areas you need to build on or are completely new to the career, a more comprehensive course in data science would be a worthwhile move to provide you with all the skills and knowledge you need.
Many universities offer degrees in data science and these are proving popular with students. However, it is not always practical with other work or family commitments to attend such an establishment full time, or you may find there is not a suitable course close enough for you to attend.
Fortunately you also have the option of studying data science online. For those who already hold a bachelor of science degree, a good option is an Online Master’s in Data Science at Worcester Polytechnic Institute. Providing a high level of support, core course elements covering all aspects of data science and a choice of specializations in either AI & Machine Learning or Big Data Analytics, this course provides a thorough grounding to prepare you for a data science career.
It is not always necessary to get a Bachelor's or Master's degree in order to start an entry-level career in data science. Online courses and bootcamps can provide a great start for people not yet ready to pay for a college degree. A data science course can also let you know if this kind of work is enjoyable and meant for you.
Good data science graduates are likely to be in high demand in this growing field and can command lucrative salaries. Careers that data scientists opt for include data analyst, data scientist, database administrator, business analyst, data architect and data engineer.
What other skills are required?
It can sometimes seem that a data scientist needs to be a jack of all trades and a master of them all. Along with good software, mathematical, and statistical skills, data scientists require good communication, structured thinking, and storytelling skills, along with the ability to carry out data visualization, including proficiency in python for data science. Thus, software skills in data science fields may be only one aspect of what you need to have a successful career.
Above all a data scientist needs to be endlessly curious, always keen to see what the data reveals or what might happen if the data is tweaked in any way. This desire for new learning will serve them well, as data science is an ever developing discipline, with new tools and software emerging all the time.
|
Why does F# using Xunit require type information when Asserting equality on Strings
I'm using F# and Xunit. (I'm relatively new to both)
I've found that when I use Xunit's Assert.Equal() I need to specify "<string>" when the types being compared are string.
For example this run's and compiles:
[<Fact>]
let Test_XunitStringAssertion() =
let s1 = "Stuff"
Assert.Equal<string>("Stuff",s1)
My question is, why can't I remove "<string>" and just assert "Assert.Equal("Stuff",s1)" instead?
It looks to me like the compiler knows the types of both arguments, so why the fuss?
Here are the errors returned when compiling Assert.Equal("Stuff",s1):
error FS0041: A unique overload for method 'Equal' could not be determined based on type information prior to this program point. The available overloads are shown below (or in the Error List window). A type annotation may be needed.
error FS0041: Possible overload: 'Assert.Equal<'T>(expected: 'T, actual: 'T) : unit'.
error FS0041: Possible overload: 'Assert.Equal<'T>(expected: seq<'T>, actual: seq<'T>) : unit'.
error FS0041: Possible overload: 'Assert.Equal<'T>(expected: 'T, actual: 'T, comparer: System.Collections.Generic.IEqualityComparer<'T>) : unit'.
error FS0041: Possible overload: 'Assert.Equal(expected: float, actual: float, precision: int) : unit'.
error FS0041: Possible overload: 'Assert.Equal(expected: decimal, actual: decimal, precision: int) : unit'.
error FS0041: Possible overload: 'Assert.Equal<'T>(expected: seq<'T>, actual: seq<'T>, comparer: System.Collections.Generic.IEqualityComparer<'T>) : unit'.
related: http://stackoverflow.com/questions/5667372/what-unit-testing-frameworks-are-available-for-f/5669263#5669263 (i.e. idiomatic F# assertions that are dramatically underpublicised)
That's because string can be matched by both the first and second overloads (remember: string :> seq<char>).
Your example with <string> removed compiles and runs without error for me, as I'd expect (although string :> seq<char>, as @Ramon Snir points out, the overload resolution algorithm resolves the ambiguity by recognizing that the supplied string arguments are "closer" to string than to seq<char>).
[<Fact>]
let Test_XunitStringAssertion() =
let s1 = "Stuff"
Assert.Equal("Stuff",s1)
I guess the sample you provided is not exactly the same as the real code which is causing you problems. Maybe s1 in your real code is not actually a string (or at least the compiler doesn't know it is).
|
I currently sit in Microsoft IT and get to share experiences and learn with some of the most interesting folks in the world. My role encourages me to generate good ideas for the company and work with others through natural alignment to build great products for Microsoft. This is good fun.
I'm recently noticing more and more that the soft skills I've picked up during my career are invaluable to success. For this reason, I've personally begun a journey to strengthen my leadership abilities. One method I use is to look introspectively at myself to discover strengths to leverage and weaknesses to manage. As a byproduct, I also do a bit of analysis of others around me to find their strengths to leverage and weaknesses to manage. This is what has brought me to this blog entry on the need for well-rounded skills.
My consulting experience: When I was an IT consultant playing roles such as Solution Architect, Development Lead and Program Lead, I was measured on delivering IT solutions to customers that were highly successful (i.e., on-time, on-budget, meeting customer satisfaction and of high system quality). In that environment, I learned that it was essential to pick up skills to survive because I was surrounded with folks that had brilliant customer-facing, fast-learning, quality-focused, and most of all strong professionalism skills. You had to have these skills to be successful.
My sales experience: When I was seconded to a sales team for a while as an Enterprise Solution Architect playing roles such as pre-sales technical architect, pre-sales strategist and partner strategy consultant, I was measured on driving programs which led to increased revenue and partner satisfaction. In that environment, I was surrounded with fast-talkers and extremely confident, capable sales resources – I'm not talking the sort that do more administrative sales, but those few, rare breeds that have mastered the art of executive sales for large enterprise customers.
My IT experience: Over my cumulative IT experience, I have played roles such as Developer, Developer Lead, Test Lead, Program Manager, Solution Architect and Enterprise Architect. All of these roles generally are about delivering solutions to the business. I’ve been surrounded with individuals who have extensive engineering knowledge and great solution delivery skills. These are essential to be successful delivering IT solutions to the business.
Ok, now the interesting part of this blog. Having a varied background in roles ranging from sales to consulting to IT has really helped me make greater impact. Skills that were a prerequisite for success in one environment are a bonus in another. For example, in IT, the skills I developed in sales roles for building trust with partners become a bonus for learning how to build trust with like-minded groups in IT. And as a result, together we achieve greater impact that is relatively unusual. We achieve a sort of 1+1=3 situation. This is not all that common in IT shops but is normal in sales and consulting.
Let me dive a bit deeper to explain. The stereotypical sales role will require you to piece together bits and bobs of products a and b, then piece together a partnership with hardware vendor x, software vendor y and delivery partner z, all within a matter of days. The skills necessary to do this are either there or they aren't, and if they aren't, you are not as successful a salesperson or presales consultant as you could be.
The stereotypical IT role will require you to think long and hard about a software system and then snap to a relatively well-defined team model in a well-defined process model, as part of a software development lifecycle for example.
Therefore, one might argue that there are skills developed in a sales environment that could prove useful to other environments such as IT. As products of our environment, we naturally develop the survival skills our environment requires. If we have a relatively well-rounded experience, we have relatively more skills to bring to the table that will allow us to make bigger impact.
Perhaps, if we made deliberate movements to be in positions of different environments and while there carefully nurtured and honed to excellence the necessary skills to survive in those environments we would be all that more effective. I think that this is an interesting opportunity for all of us.
|
During these Christmas days one of my first goals is to clean up my life. To become free to do whatever needs to be done.
Over the last years I was never in the situation to be free to do what I really liked nor what had to be done. I was never really free just to love and had time to love.
Some days ago I had a deep experience.
I got up and just wanted to make a routine check of my internet server. In one of the logfiles I found lots of errors.
I had set up and administrated a mail server for a friend's office. In that mail server I had left a long-deleted email alias for the system users. That caused a double bounce of mails ping-ponging between his server and mine. Ok, first I found the leftover, wrong email address and put in the correct one, still an address pointing to my own account. In that moment the thought popped up: "What the fuxx has MY email address to do in HIS server?" Ok, there was still the problem which caused the flood of mails. I found out that the virus scanner was broken. That caused the emails to be scanned improperly, leaving temporary files behind in the scanner directory. Just at this moment a huge 500 MB mail was delivered to the server. After several failed tries the free space on the hard disk was gone, and every mail got rejected. The whole mail system was down. The main purpose of that server was down, and my main duty was to bring it up again. It took me three hours of stressful working and configuring to get the server running again. Without the virus scanner, but running. For the last several minutes (it had long been time for breakfast) Hans was waiting for me to finish. Finally I got it done and made myself ready to leave my room.
While preparing and leaving the room my energy level was dramatically shifting, my joy to live and love that day, was rising up instantly.
At that point it was absolutely clear that I had made a mistake. I made the mistake of installing the mail server in my friend's office and offering to administrate it. It bound me to the duty of providing a well-performing server. And I led my friend to believe that he could have a nice mail server without his own knowledge or work.
So it was time to make him a Christmas present:
My and his freedom - I released the mail-server into his hands.
On my internet server I hosted half a dozen other domains. Some of them for free, others for a minimal fee.
Inspired by the experience mentioned above, I decided to quit the hosting service.
For every domain I prepared some statistics to help the owner choose the right replacement host. All in all it took me about six hours to write the mails to the hosted parties.
Just announcing that I was quitting the service took six (6) hours!!!
Looking back over the years, I was far more busy administrating MY OWN server for the service of OTHERS than I had time and energy to do the work for my own website.
By the end of the year most of the domains will be either completely offline by their owners or moved to other servers.
Finally I gain the freedom to do my own, and only my own business!
That brought me to the next step:
Helping others to administrate their own server -> YES!
Running their server -> NO, NEVER EVER AGAIN!
Blissful Christmas days to everyone
|
OPCFW_CODE
|
Why 'pale' yellow instead of 'light' yellow, and what other colors are used with 'pale'?
In LDOCE, 'light' is considered a synonym of 'pale', which means having more white in it than usual, and I also thought 'pale' and 'light' were interchangeable when it came to colors.
I'm reading 'English Vocabulary in Use', intermediate level by Cambridge University Press which is based on British English, in which there is a note that says,
Note: With some colours, we use pale, not light, e.g. pale yellow.
Firstly, is it something exclusive to British English?
And secondly, what are the other colours that you refer to as 'pale' and not 'light'?
I took a picture of the page in the book. Just click on the picture. You can find the Note on the left.
I found this on Google, Pale Brown VS Light Brown
Technically "pale" refers to the saturation of the color, and "light/dark" refers to luminance, or the perceived brightness.
In AmE usage however, light can also mean a color that is not intense. I can't think of an instance where pale could be used for a color that is intense but light (or bright).
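For anyone who likes numbers, the saturation/luminance distinction is easy to see in code. Here is a quick sketch using Python's standard colorsys module in the HSV model (the hex values are my own illustrative picks, not official definitions of these color names):

```python
# Rough numeric illustration: "pale" tracks low saturation, while
# "light"/"dark" track the value (luminance) channel. Hex values are
# illustrative picks, not authoritative color definitions.
import colorsys

def sat_and_value(hex_rgb):
    """Return (saturation, value) in HSV for a 6-digit hex color."""
    r, g, b = (int(hex_rgb[i:i + 2], 16) / 255 for i in (0, 2, 4))
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return round(s, 2), round(v, 2)

print(sat_and_value("d7f0d7"))  # pale green:   low saturation, high value
print(sat_and_value("90ee90"))  # light green:  more saturated, still light
print(sat_and_value("00ff00"))  # bright green: fully saturated and light
print(sat_and_value("006400"))  # dark green:   saturated but low value
```

Low saturation at high value is roughly what people call "pale"; high value on its own is just "light".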
As I mentioned in my comment, in general conversation, you can use pale or light interchangeably when referring to a color and be understood. If the register is more formal and you're writing for a UK English audience, you should follow the advice in your book just to be certain your phrasing won't seem odd.
For each of the examples below, I went to DuckDuckGo.com and searched for images that matched the term. I had success with each color except for pale dark green - I ended up searching for "pale dark green" fabric to find an image where the color filled the frame. I picked from the first few results the ones I felt were distinct enough to show the difference. There is not a definite line where we can say "this green is pale to everyone who looks at it". Click on the image to see the original sized image.
This is both a light green and a pale green:
This is a light green but not a pale green:
This is bright green (both light in luminance and intense in color):
This is a pale dark green (might also be called gray-green):
This is a dark green (not pale. This color is often called emerald or emerald green):
@Colleen V: So we can use light with yellow or pink. Should I ignore the note in the book? You disagree with the notion it has put, right?
@Azad I think like everything in English, it depends on the context. If you're speaking informally, I think you would be understood if you said "pale yellow" or "light yellow" and you wouldn't sound odd (at least in the US). If you are writing formally for a British audience, I would follow the book's advice.
I see, so it's mainly the matter of register and locality. Thanks
I emphatically agree that "pale" is a term of saturation and "light" is a term of luminance. As demonstrated in this brilliant answer, a color can be light and pale or light and not pale. I would argue further a color can be dark, yet pale, such as a desaturated brown.
I agree with all of these except "pale dark," which sounds weird to me.
What about dark pale green? Does the word order make a difference? It sounded a bit odd to me too but there's nothing wrong with a pale dark something as far as I can tell. @sumelic
The word order doesn't make a difference. It's the meaning that doesn't fit. For me, something that is "pale" has to also be "light." This could be relative to something else, but it excludes the concept of "dark."
@sumelic I can understand that - what would you call the pale dark green color?
I think maybe gray-green or grayish green, as you've said. Or maybe "muddy green."
I think this is related to Why are there no dark yellows, or bright violets?, on Photography Stack Exchange. In a very real sense, all yellow is light, so saying "light yellow" can feel a bit redundant.
Speaking technically, those colours are not the best examples. #1 and #5 are called Pale but their saturations = 230 & 234, both high numbers. #2 called Not-pale has sat. = 171, moderate. This is a direct contradiction of the opening statement. Human vision says "#1 & #5 are not vivid" and "#2 is vivid", disagreeing with the technical descriptions of saturation, because technically saturation is measured at constant lightness but those range from lightness 30 to 201. In this case, invoking a technical specification is extremely unhelpful as it contradicts common understanding of the words.
@Smartybartfast I'm not understanding. I wasn't trying to give a technical definition, but rather explain common usage (I just mentioned it because common usage tends to diverge from the technical definition). I searched for images that people had described by the terms I labelled them with - those colors don't represent technical "truth" but rather the most relevant hits with that description from a search engine. 'Vivid' is a whole different question :P
The answer starts 'Technically "pale" refers to the saturation...' which invites commentary on how the technical and common understandings differ. The question is about colour saturation so we will disagree on vivid as being a "whole different question" as in my part of the world (see how I spell "colour" unlike you poor colonials who can't afford the extra "u" :) ) vivid = "very high saturation or purity; produced by a pure or almost pure colouring agent" (thanks dictionary.com)
According to Google NGram, pale rather than light is the preferred term for most of the colours that I tried. The difference is much greater for yellow than other colours, and for BrE than AmE. Cream and pink are very much more common than pale yellow and pale red, though cream may include the dairy product as well.
The only colours I have found where light was the preferred term were brown (AmE and BrE) and grey/gray (AmE only).
For yellow, which is perceived as a bright colour, pale is the preferred term, and for brown, which is perceived as a dark colour, light is the preferred term.
Pale in AmE is more "literary" and light is more colloquial (unless you're speaking with a designer). Did you limit the search to the British corpora?
An NGram comparing pale/light yellow as a noun in the US and GB corpora between 1900 and 2000 shows that pale occurs more often in both, but much more often in GB than US (determined by a quick and dirty subtraction) pale yellow_NOUN:eng_us_2012,pale yellow_NOUN:eng_gb_2012,light yellow_NOUN:eng_us_2012,light yellow_NOUN:eng_gb_2012 Substituting other colors for yellow is interesting. Try pale blue versus light blue pale blue_NOUN:eng_us_2012,pale blue_NOUN:eng_gb_2012,light blue_NOUN:eng_us_2012,light blue_NOUN:eng_gb_2012
@colleenV: interesting. Do Americans spell yellow differently, or something? I didn't know that you could specify parts of speech and corpora in a search: thanks for that!
I'm a little on the fence about specifying the color as a noun, though. I think you can do pale yellow _NOUN_ as well. That might tell us if there is an adjective/noun difference.
Also, technically, there is no such thing as a pale white, grey, or black because they have no hue. I have seen "pale white" but I think it is actually a different sense of pale than "pale yellow". I'd have to think about it some more.
@ColleenV: relative usage between noun and adjective seems consistent. TBH, I only really considered looking at adjectives. Technically, yes... but technical usage tends to be much more precise than laymen's usage. Not necessarily because laymen don't understand the finer points of the meanings- often technical people hijack existing words and add their own precise meaning. FYI, pale gray/grey > pale white >> pale black.
I agree - technical usage isn't "more correct". It's more important to be precise in some situations than in others. 'I saw a rider on a pale horse' potentially conveys so much more meaning than saying 'I saw a man riding a white horse', but the more precise version would be better if you were explaining what happened to the police.
I'm not completely certain this applies to all the scenarios, but for what it's worth, pale doesn't necessarily mean the addition of white pigment to a color. Pale means:
Lacking color or intensity.
You might want to visit this link to see the difference between the two words.
Also, even though they are synonymous, there are many places where you cannot use them interchangeably. For example,
Look at her pale skin.
You don't say "light skin".
Additionally, consider the following sentence:
"Are you okay? You look awfully pale."
This 'pale' refers to the commonly used idiom "turning white in fear".
Yes, Varun KN, thanks for your comment. Though I just don't understand why 'pale' can be used with some colors instead of 'light'. It seems to be a matter of collocation, and I can't find a list for it.
Actually, let me edit the title to make my point clearer. Sorry for being ambiguous in the first place.
You may say "light skin" as well as pale skin. https://en.wikipedia.org/wiki/Light_skin
but 'light skin' is kind of a permanent feature, while 'you look pale' is more like a temporary effect of your being exhausted or something.
@Yabko I'm not sure about BrE, but in AmE "pale skin" can be a permanent feature, but it describes almost white/colorless skin, or skin that lacks melanin. Light skin seems to be relative to dark skin and could refer to skin that has some color, so it would depend on the context how close to pale skin it is.
|
STACK_EXCHANGE
|
Gated3D: Monocular 3D Object Detection From Temporal Illumination Cues
We propose a novel 3D object detection method, "Gated3D", which uses a flood-illuminated gated camera. The high resolution of gated images enables semantic understanding at long ranges. In the figure, our gated slices are color-coded with red for slice 1, green for slice 2 and blue for slice 3. We evaluate Gated3D on real data collected with a Velodyne HDL64-S3D scanning lidar as reference, as seen in the overlay on the right.
Today's state-of-the-art methods for 3D object detection are based on lidar, stereo, or monocular cameras. Lidar-based methods achieve the best accuracy, but have a large footprint, high cost, and mechanically-limited angular sampling rates, resulting in low spatial resolution at long ranges. Recent approaches using low-cost monocular or stereo cameras promise to overcome these limitations but struggle in low-light or low-contrast regions as they rely on passive CMOS sensors. In this work, we propose a novel 3D object detection modality that exploits temporal illumination cues from a low-cost monocular gated imager. We introduce a novel deep detection architecture, Gated3D, that is tailored to temporal illumination cues in gated images. This modality allows us to exploit mature 2D object feature extractors that guide the 3D predictions through a frustum segment estimation. We assess the proposed method experimentally on a 3D detection dataset that includes gated images captured over 10,000 km of driving data. We validate that our method outperforms state-of-the-art monocular and stereo methods.
To detect objects and predict their 3D location, dimension and orientation, our proposed network requires three gated slices with overlapping illumination fields. Our network first employs a 2D detection network to detect ROIs. The resulting 2D boxes are used to crop regions from both the backbone network and the input gated slices. Second, we apply a dedicated 3D network, which estimates the object's 3D location by using a frustum segment computed from the 2D boxes and the 3D statistics of the training data. The network processes the gated slices separately, then fuses the resulting features with the backbone features and estimates the 3D bounding box parameters.
Estimation of object distance
We are able to estimate the object distance from the viewing frustum together with the object's height, its projected height and the vertical focal length. This is necessary because an infinite number of 3D cuboids can project to a given 2D bounding box. In detail, we assume that the object height h follows a Gaussian distribution and use it to constrain the frustum depth d. For more information, please see the publication.
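As a rough illustration of the idea (not the paper's actual implementation, and with made-up camera and height statistics): under a pinhole model, an object of physical height h at distance d projects to h_px = f_y * h / d pixels, so d = f_y * h / h_px, and a Gaussian prior on h propagates linearly to a depth estimate with an uncertainty band.

```python
# Back-of-the-envelope sketch of the distance estimate described above.
# The pinhole relation h_px = f_y * h / d gives d = f_y * h / h_px.
# The focal length and height statistics below are illustrative only.

def frustum_depth(h_px, f_y, h_mean, h_std):
    """Estimate object distance from its projected height in pixels."""
    d_mean = f_y * h_mean / h_px   # depth implied by the mean object height
    d_std = f_y * h_std / h_px     # depth is linear in h, so the std scales too
    return d_mean, d_std

# Example: a pedestrian ~1.75 m tall projecting to 50 px with f_y = 2000 px.
d, dd = frustum_depth(h_px=50.0, f_y=2000.0, h_mean=1.75, h_std=0.1)
print(f"estimated distance: {d:.1f} m +/- {dd:.1f} m")  # 70.0 m +/- 4.0 m
```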
To record the dataset used for training and evaluating the proposed method, we used the research vehicle shown on the left-hand side. The dataset also includes corresponding lidar point clouds and stereo image pairs; see the table on the right for sensor parameters. The stereo camera is located at approximately the same position as the gated camera in order to ensure a similar viewpoint.
Qualitative comparisons on the test dataset. Bounding boxes from the proposed method are tighter and more accurate than the baseline methods. This is seen in the second image with the other methods showing large errors in pedestrian bounding box heights. The BEV lidar overlays show our method offers more accurate depth and orientation than the baselines. For example, the car in the intersection of the fourth image has a 90 degree orientation error in the pseudo-lidar and stereo baselines, and is missed in the monocular baseline. The advantages of our method are most noticeable for pedestrians, as cars are easier for other methods due to being large and specular (please zoom in for details).
Supplementary Videos - Long Test Drives
Tobias Gruber, Frank D. Julca-Aguilar, Mario Bijelic, Werner Ritter, Klaus Dietmayer, and Felix Heide. Gated2depth: Real-time dense lidar from gated images. The IEEE International Conference on Computer Vision, 2019.
Bijelic, Mario and Gruber, Tobias and Mannan, Fahim and Kraus, Florian and Ritter, Werner and Dietmayer, Klaus and Heide, Felix. Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
|
OPCFW_CODE
|
#include "Physics/WASDPhysicsControllerSystem.h"
#include "Physics/WASDPhysicsControllerComponent.h"
#include "Physics/CharacterControllerComponent.h"
#include "Physics/PhysicsUtils.h"
#include <Engine/Engine.h>
#include <Input/Keyboard.h>
#include <DeviceManager/DeviceManager.h>
#include <Scene3D/Entity.h>
#include <Scene3D/Components/CameraComponent.h>
#include <Render/Highlevel/Camera.h>
namespace DAVA
{
WASDPhysicsControllerSystem::WASDPhysicsControllerSystem(Scene* scene)
: SceneSystem(scene)
{
}
void WASDPhysicsControllerSystem::RegisterEntity(Entity* e)
{
Component* controllerComponent = e->GetComponent(Type::Instance<WASDPhysicsControllerComponent>());
if (controllerComponent != nullptr)
{
RegisterComponent(e, controllerComponent);
}
}
void WASDPhysicsControllerSystem::UnregisterEntity(Entity* e)
{
Component* controllerComponent = e->GetComponent(Type::Instance<WASDPhysicsControllerComponent>());
if (controllerComponent != nullptr)
{
UnregisterComponent(e, controllerComponent);
}
}
void WASDPhysicsControllerSystem::RegisterComponent(Entity* e, Component* c)
{
if (c->GetType()->Is<WASDPhysicsControllerComponent>())
{
wasdComponents.insert(static_cast<WASDPhysicsControllerComponent*>(c));
}
}
void WASDPhysicsControllerSystem::UnregisterComponent(Entity* e, Component* c)
{
if (c->GetType()->Is<WASDPhysicsControllerComponent>())
{
wasdComponents.erase(static_cast<WASDPhysicsControllerComponent*>(c));
}
}
void WASDPhysicsControllerSystem::PrepareForRemove()
{
wasdComponents.clear();
}
void WASDPhysicsControllerSystem::Process(float32 timeElapsed)
{
if (wasdComponents.size() == 0)
{
return;
}
Keyboard* keyboard = GetEngineContext()->deviceManager->GetKeyboard();
if (keyboard == nullptr)
{
return;
}
for (WASDPhysicsControllerComponent* wasdComponent : wasdComponents)
{
DVASSERT(wasdComponent != nullptr);
Entity* entity = wasdComponent->GetEntity();
DVASSERT(entity != nullptr);
CharacterControllerComponent* characterControllerComponent = PhysicsUtils::GetCharacterControllerComponent(entity);
if (characterControllerComponent == nullptr)
{
continue;
}
Vector3 forward = Vector3::UnitX;
Vector3 right = -Vector3::UnitY;
// This system also syncs the entity's camera (if there is one) and uses its direction for convenience
CameraComponent* cameraComponent = entity->GetComponent<CameraComponent>();
if (cameraComponent != nullptr)
{
forward = cameraComponent->GetCamera()->GetDirection();
right = -cameraComponent->GetCamera()->GetLeft();
const Vector3 direction = cameraComponent->GetCamera()->GetDirection();
cameraComponent->GetCamera()->SetPosition(entity->GetLocalTransform().GetTranslationVector());
cameraComponent->GetCamera()->SetDirection(direction);
}
forward *= moveSpeedCoeff;
right *= moveSpeedCoeff;
if (keyboard->GetKeyState(eInputElements::KB_LSHIFT).IsPressed())
{
forward *= 2.0f;
right *= 2.0f;
}
if (keyboard->GetKeyState(eInputElements::KB_W).IsPressed() || keyboard->GetKeyState(eInputElements::KB_UP).IsPressed())
{
characterControllerComponent->Move(forward);
}
if (keyboard->GetKeyState(eInputElements::KB_S).IsPressed() || keyboard->GetKeyState(eInputElements::KB_DOWN).IsPressed())
{
characterControllerComponent->Move(-forward);
}
if (keyboard->GetKeyState(eInputElements::KB_D).IsPressed() || keyboard->GetKeyState(eInputElements::KB_RIGHT).IsPressed())
{
characterControllerComponent->Move(right);
}
if (keyboard->GetKeyState(eInputElements::KB_A).IsPressed() || keyboard->GetKeyState(eInputElements::KB_LEFT).IsPressed())
{
characterControllerComponent->Move(-right);
}
}
}
}
|
STACK_EDU
|
Pocket Hunting Dimension – Chapter 1233: How Do They Have No Awareness?
The girls looked at him speechlessly.
Halfway, it fell down and turned to dust.
Right then, she saw boundless icebergs merging with the wasteland.
The temperature there was getting lower and lower. Even Lu Li was feeling the bite of the cold.
Lu Li rejoiced. They had never been here before.
Others would be scared to death if they saw this.
Lu Ze nodded.
Did the ice bird overlord live here?
If everybody reached beginner mastery, they would soon be able to consider fighting overlord bosses.
Lu Ze smiled and patted her head. "Where did you go just then?"
Was she the first one to find it?
A thunderous explosion occurred in the wasteland.
What went down?
He could only sigh then.
Still it didn’t work…
She was very careful, and yet she died faster than Nangong Jing?
Lu Ze looked around. The girls were no longer there.
It seemed surreal.
Lu Li picked it up and grinned.
She was curious and flew toward it.
The first bear turned into dust.
Should he go and have a look?
Qiuyue Hesha said, "What are you talking about? Hurry up and eat. Ying Ying is going to drool."
She even saw a number of snow-white bears. They were all 8-9 meters tall.
The further she went, the lower the temperature became.
Alice had just come back from outside. She flashed a grin at him. "Senior, you're done with cultivation! I just finished cooking!"
Lu Ze had especially received the Undying Battle Intent Divine Art. He passed this divine art along to the girls. However, their Battle Intent God Art hadn't reached the appropriate level of mastery yet.
With this phenomenon, the Human Race could be greatly strengthened.
Lu Ze slowly opened his eyes. He was delighted with his progress. He had broken through to level-6 cosmic cloud state just a week ago. At this rate, he could reach level-7 cosmic cloud state in, at most, a month.
|
OPCFW_CODE
|
Pocket Hunting Dimension – Chapter 981: Series of Surprises
Lu Ze was quite keen.
Therefore, the alliance fleets made up their minds to go back to the federation and keep the resources there. Only then did they feel secure.
Zuoqiu Xunshuang came in and said, "We're about to reach the border, so I came to tell you guys… little Ying Ying~~" She left Lu Ze behind and went to hug Ying Ying.
Eddie, Brenda, and the cosmic system states of the Winged Race went over.
When all of the fleets had gathered, they left the border region together and went into warp dimension, flying towards the federation.
The group was surprised once more. There really was a series of surprises today.
There were just two equipment crystals: a breastplate and a pair of boots. He gave them to Lu Li and Alice.
The group smiled. If that happened, their total power would increase yet again.
Liu Zhiyun had a complicated expression. Some time ago, he had thought he would be the first prodigy in the Human Race to break through to cosmic system state.
Zuoqiu Xunshuang nodded with satisfaction. "Oh, I see."
Martha nodded and smiled. "The rest of the races aren't here?"
Zuoqiu Xunshuang looked around curiously. "Where are Jing Jing and the other girls?"
Lu Ze replied, "They haven't gotten up yet."
Ying Ying: “…” Lu Ze: “…”
Then, the seniors started discussing their next moves.
Martha nodded and sat down to wait.
Nangong Jing seemed to be having a headache as well.
Everyone rejoiced. This was too important for the Human Race now.
He grew stronger, but the effects of the crystal were declining with use.
This was still very good.
In at most a week, he would be breaking through to level-2 cosmic system state.
Lu Ze sighed. "We found that hard region and acquired six poison balls and 16 world crystals."
Lu Ze felt he only needed one more before trying to use a level-5 cosmic system state red liquid. Lu Ze went out of the room, and then there was a knock at their door.
More races came to the territory. The alliance had taken most of the large resource points. If other civilizations knew how many resources they had gathered, it would be hard for them not to want to come after them.
|
OPCFW_CODE
|
This article introduces how to convert MBR to GPT in Windows Server 2008 R2 without losing data: change a disk from MBR to GPT with mbr2gpt.exe or NIUBI.
Many Windows 2008 servers have been running for many years, and eventually the storage device needs to be replaced, no matter whether you use a physical disk or a RAID array. The most common issue after upgrading a disk is that you cannot use the full disk space. For example, on a 4TB disk you can only use 2TB; the remaining space can't be made into a new volume or used to extend another volume in Disk Management. In this case, you need to change the disk from MBR to GPT.
On an MBR-style disk, you can create a maximum of 4 Primary partitions. If you want to create more, you also need to convert the MBR disk to GPT in Windows Server 2008.
Cannot convert to GPT in Server 2008 Disk Management
Windows Server 2008's native Disk Management has the ability to convert a disk between MBR and GPT, but there must be no partitions on the disk; otherwise these options are grayed out. As you can see on my test server, there are drives F: and H: on Disk 1.
Obviously, the built-in conversion option is meant for a new disk, or for when there is another large disk to transfer files to before deleting the existing partitions.
You cannot convert the system disk from MBR to GPT in Server 2008 Disk Management, because you can't delete the system partition from within Windows.
1-click method to convert MBR to GPT
If you want to convert an MBR disk without an Operating System to GPT, it is very easy and fast with NIUBI Partition Editor.
Download NIUBI and you'll see all disks and partitions, with their structure and other information, on the right.
Watch the video how to convert MBR to GPT disk in Windows Server 2008:
How to convert system disk from MBR to GPT
It is much more complicated to convert an MBR system disk to GPT in Windows Server 2008, because the boot strategy is different; in addition, your hardware must support booting from UEFI.
To convert the system disk, it is suggested to convert with MBR2GPT.exe, which is provided by Microsoft. MBR2GPT works via the command prompt, and no such command is included in Windows Server 2008 (or R2); therefore, you need to download it from Microsoft.
Steps to convert MBR to GPT with mbr2gpt.exe in Windows Server 2008 (R2):
Step 1: Download Windows 10 setup tool from
https://www.microsoft.com/en-us/software-download/windows10 and select the second option to create installation media with it.
For a VMware/Hyper-V virtual server, you may create an ISO file. For physical servers, you need to create a bootable DVD or USB drive.
Step 2: Restart the server and boot from the ISO, DVD or USB flash drive. When it asks you to "Install now", do NOT click it; click "Repair your computer" at the bottom left instead. Then click Troubleshoot > Command Prompt in the next windows.
Step 3: You just need 2 commands to convert MBR system disk to GPT:
- Type cd.. and press Enter.
- Type mbr2gpt /convert and press Enter.
Watch the video how to convert MBR disk to GPT with MBR2GPT command in Windows Server 2008:
It is easy to convert with the mbr2gpt command prompt, but it takes time to download the Windows 10 setup tool and create the bootable media.
Preconditions for the Server 2008 mbr2gpt command prompt
If your disk partition configuration doesn't meet the requirements of MBR2GPT, you'll receive an error such as "Validating layout, disk sector size is: 512 bytes. Disk layout validation failed for disk 0" or "MBR2GPT: Conversion failed".
Before any change to the disk is made, MBR2GPT validates the layout and geometry of the selected disk to ensure that:
1. The disk is currently using MBR.
2. There are at most 3 Primary partitions in the MBR partition table.
3. One of the partitions is set as active and is the system partition.
4. The disk does not have any Extended/Logical partition.
5. The BCD store on the system partition contains a default OS entry pointing to an OS partition.
6. The volume IDs can be retrieved for each volume which has a drive letter assigned.
7. All partitions on the disk are of MBR types recognized by Windows or have a mapping specified using the /map command-line option.
If any of these checks fails, the conversion will not proceed and an error is returned (the disk won't be converted or modified). In the list, you should pay particular attention to numbers 2, 3 and 7.
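To make the checklist concrete, here is a small hypothetical sketch of these layout checks in Python. The Disk structure and its field names are made up for illustration; the real validation happens inside mbr2gpt.exe itself:

```python
# Hypothetical sketch of the MBR2GPT layout checks listed above.
# All names here are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Disk:
    style: str                          # "MBR" or "GPT"
    primary_partitions: int             # count of primary partitions
    has_active_system_partition: bool   # one partition active + system
    has_extended_partition: bool        # any extended/logical partition
    partition_types: list = field(default_factory=list)

SUPPORTED = {"NTFS", "FAT32"}  # types recognized without a /map override

def can_convert(disk):
    """Return a list of failed checks; an empty list means conversion can proceed."""
    errors = []
    if disk.style != "MBR":
        errors.append("disk is not MBR")
    if disk.primary_partitions > 3:
        errors.append("more than 3 primary partitions")
    if not disk.has_active_system_partition:
        errors.append("no active system partition")
    if disk.has_extended_partition:
        errors.append("extended/logical partition present")
    unsupported = set(disk.partition_types) - SUPPORTED
    if unsupported:
        errors.append(f"unsupported partition types: {sorted(unsupported)}")
    return errors

ok = Disk("MBR", 3, True, False, ["NTFS", "NTFS"])
bad = Disk("MBR", 4, True, True, ["NTFS", "EXT3"])
print(can_convert(ok))   # [] -- all checks pass
print(can_convert(bad))  # three failed checks
```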
Check disk partition layout in Disk Management
Press Windows and R together on the keyboard, type diskmgmt.msc and press Enter. You'll then see the disks and partitions with their structure and other information in Server 2008 Disk Management.
1. Check partition status:
In this screenshot, drive D: is a Logical partition. In this situation, you need to convert D: to a Primary partition with NIUBI.
Note: all partitions should be Primary.
2. Check the number of partitions:
In this screenshot, there are 4 Primary partitions on Disk 0. In this situation, you need to move one partition to another disk with NIUBI.
Note: there should be at most 3 Primary partitions on the disk.
3. Check partition type:
In Windows Server 2008, the common partition types are NTFS and FAT32. If there are other types of partitions, such as EXT2/3, on this MBR disk, it cannot be converted. Check the list of supported partition types.
It is a bit complicated to convert a system MBR disk to GPT, but mbr2gpt is the safest tool. If the disk has no Operating System, it is very easy to convert MBR to GPT in Windows Server 2008 R2 with NIUBI Partition Editor. Besides converting a disk from MBR to GPT, it helps convert partitions from NTFS to FAT32 and between Primary and Logical. It also helps copy, shrink, extend, merge, move, defrag, wipe and hide partitions, among other operations.
|
OPCFW_CODE
|
Helical Insight gives users the flexibility to apply database functions to different data formats such as text, numeric, date, time, and date-time. The steps below show how to apply a single function, but you can also nest multiple functions of different types one within another. For example, an outer function that converts to uppercase can wrap an inner function that extracts the month name.
Step 1: After getting the columns in the selection area, click on the column and then click on “Function”
Step 2: A window will then open. You can either directly start typing the kind of function you want to use and select it from the autosuggest,
or, from the bottom portion, search the various functions for each datatype and double-click one to add it to the list. As mentioned earlier, first click on “Data Type” and then select one of the “Functions”.
Step 3: Once the function to be applied is selected, specify the column to apply it to, along with any other parameter that needs to be passed.
You can either directly start typing the name of the column you want to use and select it from the autosuggest,
or search for the column name in the “Fields” section at the bottom and double-click it to add it to the list.
Note: The functions in the list vary for different databases.
Clicking on the function name also provides a description on how to use it.
Step 4: Click to generate the report.
In a similar way you can apply nested functions within each other for more complex data manipulation.
Note: Being a very developer friendly open source BI product, Helical Insight always allows you to add your own custom function in this list. Refer to this blog to learn more : https://www.helicalinsight.com/adding-a-database-function/ The process is the same for older as well as newer version.
Please note that, depending on the DB function applied, the datatype of a column can change (from numeric to text, text to numeric, etc.) based on the data sent back from the DB, and this will accordingly change how the Helical Insight application treats the column.
Column ‘mode_of_payment’ is a text field and is used as dimension as shown by blue color.
Now, if instead of grouping the values we add an aggregation of ‘count’, the column becomes numeric information, which the application treats as a measure, shown in green.
TIP: All dimensions are shown as blue fields and all measures as green fields. This makes it clear how the application is currently treating them.
If you want to use the count of ‘mode_of_payment’ as a dimension, then in the dropdown of this field choose ‘Discrete’. This will change it back to a dimension. Refer to this link to learn more about how to use a measure as a dimension.
Read more about dimension and measure and interchanging it.
How to delete an applied database function in Helical Insight?
Once you have created a report using the applied database function there might be a need to delete the applied DB function or change those values. In this blog, we will learn how we can go about doing the same.
Below is the snapshot of the report which is created by applying the Database function on one of the columns. From the date column, we have extracted the Year and done an analysis of the Year-wise cost of travel.
Now let us say we want to remove or make modifications in this, again click on that column and go to “Function”.
Click on “Function” and the below screen will appear which will show the details of the function which has been applied on the current screen. You can make certain kinds of changes directly here as well.
Select and delete the function which has been applied. You can then apply any other function etc. Once done click on “Save”.
|
OPCFW_CODE
|
IBIS-Q System Documentation - FAQ
Overview - FAQ
This document provides answers to the frequently asked questions (FAQ) and troubleshooting tips for the IBIS-PH Query System's module development. The first section lists the FAQs, while the last section gives a general decision tree that can be used to help locate and diagnose problems.
This section discusses some of the frequently asked questions on module development.
|What do I name my file?||The file name can be whatever you choose, as long as the menu button points to your selection file and the selection file points to your module file.|
|What sub directory/folder should I put my file in?||The selection and module files should go in the folder for that module, located in the tomcat559\webapps\ibisph-view-2\xml\query\module folder on your localhost.|
|How do I test my just created module? How do I check that my module is the correct structure and well formed?||Refer to the Testing and Troubleshooting documents for steps and helps in testing modules.|
|How do I remove a non related step, selections, or group by dimension choice for a measure?||Steps, selections and dimensions can be excluded from specific measures by using the CRITERIA, EXCLUSIONS tags within the CONFIGURATION tag for that measure. See the Query Module Example file, lines 75-85.|
|I'm done with my module now what am I supposed to do with it?||Follow the publishing procedures for your organization. In general, completed modules should be tested and checked into the repository. Files in the repository will be used for the next deployment.|
Troubleshooting for Localhost Testing
This section discusses the process of determining where the problem is. It assumes you have a working PC with Internet Explorer 6.0+ or Netscape Navigator 6.0+ that is also working properly.
Can you bring up your module page on your PC?
After you have started Tomcat use the URL http://localhost/ibisph-view-2/query/selection/ (module folder)/(module selection file name).html
Example URLs: http://localhost/ibisph-view-2/query/selection/pop/PopSelection.html http://localhost:8080/ibisph-view-2/query/selection/pop/PopSelection.html
Check to make sure Tomcat is running. Try http://localhost/
- Page not found.
Tomcat and/or the ibisph-view system has a problem. Call the system administrator.
- Home page displayed.
Does your module name exactly match your module's filename, which should be in the tomcat/webapps/ibisph-view/xml/query/module directory?
Does your measure name exactly match a measure name contained within your module xml file? This value should be in one of the /QUERY/MODULE/MEASURES/MEASURE/NAME elements.
Is your document well formed? (Refer to the Testing document for further instructions.)
Make it well formed and test again.
Fix the name and test again
- Is Tomcat throwing an exception (a bunch of text displayed in the Tomcat output window)? Look at the first few lines of the exception error; sometimes this gives the line number of the offending XML code.
Was Tomcat exception helpful?
Fix and try again.
Take a screen shot of the exception and email/call system administrator
Start the Tomcat service and try again.
Run a query. Does it return data?
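For the well-formedness check referenced in the tree above, one quick option is libxml2's xmllint (an assumption about your toolchain; the Testing document remains the authoritative reference). It prints nothing on success and reports the offending line number on failure. The file names here are examples taken from this document:

```shell
# Silent on success; on failure, reports the line of the offending XML
xmllint --noout pop/PopSelection.xml
xmllint --noout pop/PopModule.xml
```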
|
OPCFW_CODE
|
We’re excited to announce that you can now embed entire Power BI organizational apps in Microsoft Teams tabs. Until now, this has been one of the top feature requests for Power BI integration with Microsoft Teams. It helps teams and organizations put the full Power BI org app experience directly where people work every day. By adding org apps to channels and meetings, you enable everyone to access the data they need.
We’ve started to roll out the experience to commercial cloud customers and expect it to be available for everyone in the next week or two.
Let’s look at the new experience and its benefits.
Picking an entire Power BI organizational app to embed in Microsoft Teams
To embed a Power BI organizational app, add a Power BI tab to a channel or meeting.
Go to the Apps pivot in the selection experience and notice the new checkbox next to the app name.
When you check the box, you’re choosing to embed the entire organizational app in the Power BI tab. If you don’t see an app in the list, go to the Power BI app in Teams and install an app first or create one from a workspace you own. If you expand the app, you can continue to embed a specific report or scorecard from within an app.
When you pick an app, the app permissions should be set correctly for the team or meeting you’re embedding it into. It’s best to ask the app owner to update the app permissions for you. But don’t worry: end users without permissions can request access from within the Power BI tab in Microsoft Teams.
Embedding an entire Power BI organizational app
End users of the app will be greeted with the familiar Power BI organizational app user experience.
The app left navigation is shown as it is in the Power BI service. Users can navigate within the app. Reports, Scorecards, Excel workbooks, and even Dashboards (yes Dashboards!) open natively within the embedded view.
Without leaving the channel you can expand the tab and even discuss in the channel or chat with colleagues.
Users need permission to use the app in Power BI, so work with the app owner to ensure everyone has the access they need. End users can request access as well. The new audiences experience for Power BI organizational apps is fully supported in the Teams tab.
If an end user doesn’t have the app installed, it will be installed for them when they open the app in the tab, assuming they have permission to the app.
We tried to keep users in Teams as much as possible when using org apps in the tab. However, some items will open in a browser window because they won’t work when embedded in Teams Desktop or in Teams for the web. For common actions like viewing drill-through reports and dashboard tiles, we try to open them in place and provide a back experience to navigate to the original item.
Here are a few of the cases you might encounter that open in a browser window:
- App navigation items that use a link set to open in “content area” or “current tab” open in a new browser window
- Custom links in reports or dashboard tiles open in a new browser window
Why embedding Power BI organizational apps matters in Microsoft Teams
Every organization looks to efficiently deliver data to end users. As team members change, as meeting invites get forwarded to the right folks, or as new content is updated, it’s critical to ensure everyone can quickly access the data they need.
Power BI organizational apps have three important qualities that help teams and groups work effectively together:
- You can share an organizational app with an Office 365 group so that all Team members (and guests if you allow that) can access all the app content. This streamlines permissions management.
- You can bring all the related content into a single navigation that you customize to your team’s needs. You can name items and define audiences to target content to specific users or job roles.
- You can streamline content discovery for more of your end users because the app is branded by an icon and name, so users can more quickly find the data they need. As you add more reports, end users can easily find and discover them because they’re part of the same organizational apps they already use.
As you consider how best to leverage organizational apps in Microsoft Teams, we’d encourage you to give app end users the ability to build new reports connected to the app datasets (grant the build permission). This enables end users to connect to the app data in Excel or create new reports using the same trustworthy app data. These customized and refreshable workbooks and reports speed data culture by helping more of your workforce find and share insights more quickly.
More to come
We’re very excited about this major update to the Power BI in Teams experience. It adds to our announcements about improved chat, feedback, and tab upgrade experiences. Head over to ideas.powerbi.com to vote for further improvements we could make. Use the new Give us feedback experience to let us know what you think of our in-Teams experiences.
We’re not done yet with our start-of-year updates for Power BI in Microsoft Teams. Here are a few hints of what’s to come:
- A better way to handle context switches in Teams
- Even more powerful tab configuration options
|
OPCFW_CODE
|
A long time ago I built a radio using a Philips UV616/6456 TV tuner that is capable of receiving radio signals over a large range of frequencies. It ranges from 47 MHz up to 860 MHz, which gives me the possibility of decoding either Over-the-Air or Cable TV signals.
The problem is that the radio doesn't have a frequency display, so tuning a particular frequency is always a challenge. This project is about building a frequency counter, using a 2x16 LCD and a small PIC 18F1320 micro-controller for the UV616/6456 receiver.
Things taken into consideration while designing the frequency counter:
To measure a frequency there are two approaches that I know of:
- counting the number of pulses seen during a pre-set time window; and
- measuring the time taken by a whole number of input cycles (the reciprocal technique).
I chose the pre-set time approach, using one timer to create the fixed measurement window and another timer as a counter to accumulate the number of pulses applied to its input pin. The reciprocal technique usually gives better results when the event being measured repeats itself with sufficient stability and the frequency is considerably lower than that of the clock oscillator being used, because resolution improves greatly when you measure the time required for a whole number of cycles rather than counting the whole cycles observed during a pre-set duration. The second approach was not implemented.
As for the frequency being measured, the UV616/6456 tuner doesn't output the currently tuned frequency. Instead it outputs the sum of that frequency with its internal I/F (37.3MHz) and then divides the sum by 256. This means when tuning 102.20 MHz we get (102.2e6+37.3e6)/256 = 544.921KHz.
The table below shows the output frequency ranges for every selector position on the radio. The output frequencies include the I/F and are divided by 256:
|Selector||Tuner Min||Tuner Max||Output Min||Output Max|
|P1 VHF||47 MHz||110 MHz||329.3 KHz||575.4 KHz|
|P2 VHF||110 MHz||300 MHz||575.4 KHz||1.318 MHz|
|P3 UHF||300 MHz||470 MHz||1.318 MHz||1.982 MHz|
|P4 UHF||470 MHz||860 MHz||1.982 MHz||3.505 MHz|
These output frequencies range from 329.3KHz to 3.5MHz and will be injected into the micro-controller's counter input.
The display is a 2x16 LCD in 4 bit mode.
The schematic is below:
It consists of a complete linear power supply built using a 78L05, a signal shaper to change the sinusoidal input into a square wave that is fed into the timer input pin T0CKI and the 2x16 LCD.
Timer 0 counts the pulses from the radio frequency output and timer 1 creates the pre-set time for timer 0 to count pulses.
The printed circuit board developed for the prototype is single sided with 3 jumper wires.
The connector on the left is the AC input from the transformer. The connector in the middle, below the PIC, is the frequency input. Its right pin connects to the coaxial's middle wire and the left pin connects to the copper mesh shield. The contrast of the LCD is adjusted with the 10K trimmer. The header near the trimmer is for future expansion and has no use at this time.
The software is written in C and implements a frequency counter based on counting frequency pulses on a pre-set time window. Timer 0 counts the pulses on its T0CKI pin. Timer 1 counts the pre-set time window. When the window expires, the counting of pulses stops.
This process repeats itself after the measured frequency is presented on screen.
To reduce the measurement error, the number of pulses collected in the fixed time window should be as large as possible, which argues for a longer window. But to give the user a sense of real-time measurement, the window should not be too long. And the worst problem: the timers can only count up to 65535.
So I set two different scales:
With a prescaler of 1:16 and a pre-set time of 0.496 seconds, the frequency counter has a precision of 2 decimal places. But when the prescaler is set to 1:64 with a pre-set time of 0.16384 seconds, the precision drops to 1 decimal place.
The next table shows the frequencies involved, pre-set times and so on:
|Range||Tuner Min||Tuner Max||Prescaler||Min freq after prescaler||Max freq after prescaler||Pre-set time|
Using these pre-set window times the values obtained in timer 0 will be:
|Range||Prescaler||Min Freq||Max Freq||Pre-set time||Min Count||Max Count|
The micro-controller main oscillator runs from a 2MHz crystal. No interrupts are needed since everything runs freely in a loop with two tasks: measure and display.
This is how it looks from the outside:
And from the inside:
Some screens of the unit running
Power on message on the left and No signal on the right
Listening to "Antena 3" at 102.20MHz while testing
This is the Family picture with the frequency counter on top and the radio below.
All downloads are free:
Published on Friday 2009/07/10, last modified on Thursday 2012/02/23
|
OPCFW_CODE
|
Prof. Ram Bilas Pachori
Indian Institute of Technology Indore, India
Ram Bilas Pachori received the B.E. degree with honours in Electronics and Communication Engineering from Rajiv Gandhi Technological University, Bhopal, India in 2001, the M.Tech. and Ph.D. degrees in Electrical Engineering from Indian Institute of Technology Kanpur, India in 2003 and 2008, respectively.
He worked as a Post-Doctoral Fellow at Charles Delaunay Institute, University of Technology of Troyes, France during 2007-2008. He served as an Assistant Professor at Communication Research Center, International Institute of Information Technology, Hyderabad, India during 2008-2009. He served as an Assistant Professor at Department of Electrical Engineering, Indian Institute of Technology Indore, India during 2009-2013. He worked as an Associate Professor at Department of Electrical Engineering, Indian Institute of Technology Indore during 2013-2017, and has been a Professor there since 2017. Currently, he is also associated with Center for Advanced Electronics at Indian Institute of Technology Indore. He was a Visiting Professor at Neural Dynamics of Visual Cognition Lab, Free University of Berlin, Germany during July-September, 2022. He served as a Visiting Professor at School of Medicine, Faculty of Health and Medical Sciences, Taylor’s University, Malaysia during 2018-2019. Previously, he worked as a Visiting Scholar at Intelligent Systems Research Center, Ulster University, Londonderry, UK during December 2014.
His research interests are in the areas of Signal and Image Processing, Biomedical Signal Processing, Nonstationary Signal Processing, Speech Signal Processing, Brain-Computer Interfacing, Machine Learning, and Artificial Intelligence & Internet of Things in Healthcare.
He is an Associate Editor of Electronics Letters, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Biomedical Signal Processing and Control and an Editor of IETE Technical Review journal. He is a senior member of IEEE and a Fellow of IETE, IEI, and IET. He has served as member of review boards for more than 100 scientific journals. He has also served in the scientific committees of various national and international conferences. He has delivered more than 250 talks and lectures in conferences, workshops, short term courses, and academic events organized by various institutes. He has been ranked #21 in India among top scientists for 2022 in the field of Computer Science by Research.com website (March, 2022). He has been listed in the world's top 2 % scientists in the study carried out at Stanford University, USA (October, 2020, October, 2021, and October, 2022). He has received several awards including Achievement Award (IICAI conference, 2011), Best Paper Award (ICHIT conference, 2012), Excellent Grade in the Review of Sponsored Project (DST, 2014), Best Research Paper Awards (IIT Indore, 2015 & 2016), Premium Awards for Best Papers (IET Science, Measurement & Technology journal, 2019 & 2020), IETE Prof. SVC Aiya Memorial Award (2021), and Best Paper Award (DSPA Conference, 2022).
He has supervised 14 Ph.D., 23 M.Tech., and 42 B.Tech. students for their theses and projects (15 Ph.D., 03 M.Tech., 01 M.S. (by Research), and 07 B.Tech. under progress). He has 266 publications which include journal papers (164), conference papers (72), books (08), and book chapters (22). He has also three patents: 01 Australian patent (granted) and 02 Indian patents (filed). His publications have been cited approximately 12,000 times with h-index of 57 according to Google Scholar. He has worked on various research projects with funding support from SERB, DST, DBT, CSIR, and ICMR.
Prof. Gang Wang
School of Automation, Beijing Institute of Technology, China
Dr. Gang Wang received a B.Eng. degree in Automatic Control in 2011, and a Ph.D. degree in Control Science and Engineering in 2018, both from the Beijing Institute of Technology, Beijing, China. He also received a Ph.D. degree in Electrical and Computer Engineering from the University of Minnesota, Minneapolis, USA, in 2018, where he stayed as a postdoctoral researcher until July 2020. Since August 2020, he has been a professor with the School of Automation at the Beijing Institute of Technology.
His research interests focus on the areas of signal processing, control, and reinforcement learning with applications to cyber-physical systems and multi-agent systems. He was the recipient of the Best Paper Award from the Frontiers of Information Technology & Electronic Engineering (FITEE) in 2021, the Excellent Doctoral Dissertation Award from the Chinese Association of Automation in 2019, the Best Conference Paper at the 2019 IEEE Power & Energy Society General Meeting, and the Best Student Paper Award from the 2017 European Signal Processing Conference. He is currently on the editorial boards of Signal Processing, Actuators, and IEEE Transactions on Signal and Information Processing over Networks.
|
OPCFW_CODE
|
“Transfer” Folders and Printing
When previewing a report, please DO NOT click the printer icon from that menu. There is no guarantee where the output will go. Instead, you should choose “Go” or “Print” on the previous screen.
Finding your reports
When MemInfo creates .pdf, .doc, and .xls files, the files are placed into the C:\MemInfoTransfer folder on your own computer.
Here’s a suggestion: Next to your MemInfo xxxxxx-x icon, create another icon that is a shortcut to your C:\MemInfoTransfer folder. Go to Printer Setup to find out how to do this.
(You can change the default location where your .pdf files should go. See below.)
If a report is produced but you cannot find it in your C:\MemInfoTransfer folder, then on the server choose File | About from the menu. “Local Transfer Folder” should say “C:\MEMINFOTRANSFER on your computer”. [On a Mac, it should say “\\tsclient\meminfotransfer\”.] If you see “(unable to connect)”, you need to re-establish your connection to the server before you will be able to print or transfer anything to your computer. To disconnect, simply click the x in the blue bar at the top of the screen. You do not have to exit from MemInfo itself.
If you still have a problem finding your report, please contact MemInfo support with an email to 911 at MemInfo.com. (Please only use that email address for crucial issues. Otherwise, use Support at MemInfo.com.)
To change the location of your Transfer Folder
We encourage you to use the default C:\MemInfoTransfer folder on your computer for receiving .pdf files. To set that up, in MemInfo go to File | Setup | Preferences. Next to “File Transfer Folder on Your Computer” the box should say “\\tsclient\c\meminfotransfer\” If it does not, then click Browse, open Network, open tsclient, open \\tsclient\c, and click on your MemInfoTransfer folder. Then click Ok.
[On a Mac, it should say “\\tsclient\meminfotransfer\”.]
If you want your report files to go somewhere other than C:\MemInfoTransfer, choose File | Setup | Preferences
Where it says “File Transfer Folder on Your Computer,” click the Browse button.
[NOTE: Each time you click, you might need to wait several seconds for anything to happen.]
In the window that pops up, click on Network. Then skip the next line and click on \\tsclient\C. Then click on the folder where you want the report to go. If you want it to go to your desktop, scroll down and click on Users, then click on your user name, click Desktop, and then click the Ok button. Be sure to verify that “Desktop” appears in the box on your Preferences screen.
NOTE: If you get the message “Not Responding”, you might have to wait 20 seconds to see a response.
From the server, you can bring up Windows Explorer. You will notice under “Computer” that there are 2 C drives. The first one refers to the C: drive on the Server, and the second one points to your own C: drive. Keeping this in mind, you can copy files between the Server and your own computer.
NOTE: If you bring up Windows Explorer on your own computer, you will not see the Server’s C drive. You must view the Server and click on the Explorer icon in that window.
|
OPCFW_CODE
|
RedisGraph is a Redis module that adds Graph database capabilities to Redis and enables organizations to process any kind of connected data faster compared to traditional relational or Graph databases. RedisGraph is also the first queryable Property Graph database to use sparse matrices to represent the adjacency matrix of a graph, and linear algebra to query the graph.
Technology behind RedisGraph
In traditional graph databases, operations are typically based on data structures like the hexastore or adjacency list, which require developers to write less efficient code in order to execute graph traversal operations. For example, see this approach to “find the next BFS (Breadth-First-Search) level” below:
Instead, GraphBLAS (Graph Basic Linear Algebra Subprograms), a highly optimized library for sparse matrix operations, represents graphs with an intuitive adjacency matrix: a square table with all nodes along each axis and a value of ‘1’ for each connected pair. Our new engine uses the linear algebra of matrices to execute graph traversal operations, which makes query processing much more efficient. For example, the matrix below represents the connected graph from the example above, in which each column (1-7) represents a source node and each row (1-7) represents a destination node:
Naively implemented, the traditional approach scales very poorly for graphs that model real-world problems, which tend to be very sparse. Space and time complexity for a matrix is governed by its dimensions (O(n²) for square matrices), making alternatives like adjacency lists more appealing for most practical applications with scaling requirements.
Benefits of Using GraphBLAS
GraphBLAS defines a core set of matrix-based graph operations that can be used to implement a wide class of graph algorithms in many different programming environments. The principal benefits of using matrices for graph algorithms are space efficiency and performance.
The GraphBLAS library gives developers the benefits of matrix representations while optimizing for sparse data sets. GraphBLAS encodes matrices in Compressed Sparse Column (CSC) form, which has a cost determined by the number of non-zero elements contained. In total, the space complexity of a matrix is:
(# of columns + 1) + (2 * # of non-zero elements)
This encoding is highly space-efficient, and also allows us to treat graph matrices as mathematical operands. As a result, database operations can be executed as algebraic expressions without first translating data out of another form (like a series of adjacency lists). For example, finding the next BFS level on the graph above is as simple as joining the 1’s of column #1 with column #3, or in matrix algebra terms, multiplying the graph matrix by a filter vector for columns #1 and #3:
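To make the idea concrete, here is a toy Python sketch (plain NumPy, not GraphBLAS, and a made-up 4-node graph rather than the one pictured): one BFS level is just a matrix-vector product, with columns as source nodes and rows as destinations:

```python
import numpy as np

# A[i, j] = 1 means an edge from source node j+1 to destination node i+1.
A = np.array([
    [0, 0, 0, 0],
    [1, 0, 0, 0],   # 1 -> 2
    [1, 0, 0, 0],   # 1 -> 3
    [0, 1, 1, 0],   # 2 -> 4 and 3 -> 4
])

frontier = np.array([1, 0, 0, 0])          # start BFS at node 1
level1 = (A @ frontier > 0).astype(int)    # nodes reachable in one hop
level2 = (A @ level1 > 0).astype(int)      # nodes reachable in two hops

print(level1)  # [0 1 1 0]  -> nodes 2 and 3
print(level2)  # [0 0 0 1]  -> node 4
```

Each multiplication by the adjacency matrix advances the frontier by one hop, which is exactly the operation GraphBLAS optimizes for sparse data.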
The RedisGraph module, based on the GraphBLAS engine, shows significant performance advantages over existing graph databases. Recently, TigerGraph conducted a benchmark performance test in which it came out significantly faster than all other existing Graph database solutions. So we decided to benchmark the data-loading and query performance of RedisGraph against TigerGraph. The results below show that RedisGraph is faster than TigerGraph and, consequently, than all the other Graph databases TigerGraph tested against.
Solving Graph problems with linear algebra
With GraphBLAS, the library invocations are direct mathematical operations, such as matrix multiplications and transposes. These calls implement a lazy evaluation and execution strategy to reduce computation time and the calls are optimized behind the scenes.
Additionally, with the adoption of the CSC approach, RedisGraph allows indexed access to the columns of the matrix: a non-zero array stores the non-zero elements of each column, and a column-pointer array stores a pointer to the first non-zero element of each column. This significantly accelerates response times for complex, multi-hop queries on large data sets.
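The storage layout is easy to see with SciPy's csc_matrix (used here purely for illustration; RedisGraph uses GraphBLAS internally): the column-pointer array has one entry per column plus one, and there is one row index plus one value per non-zero element, matching the cost formula above. The pointer array also gives direct, indexed access to where each column starts:

```python
import numpy as np
from scipy.sparse import csc_matrix

# A 3x3 matrix with 4 non-zero elements.
A = csc_matrix(np.array([
    [0, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
]))

print(len(A.indptr))   # 4 -> (# of columns + 1) column pointers
print(len(A.indices))  # 4 -> one row index per non-zero element
print(len(A.data))     # 4 -> one value per non-zero element
```

Total storage is (3 + 1) + 2 * 4 = 12 entries, exactly the (# of columns + 1) + (2 * # of non-zero elements) cost given above.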
Support for a declarative query language
Cypher is a very popular declarative pattern-matching language created to query graph data representations effectively. RedisGraph implements and supports the Cypher language (the industry-standard and widely adopted query language for graph databases) and automatically translates queries into linear algebraic expressions. Learn more about our Cypher coverage here.
In addition, Redis Labs is working closely with other Graph database vendors on the Graph Query Language (GQL) taskforce to create a standardized declarative language for Graph databases.
How to Get Started
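As a quick taste (assuming a Redis server with the RedisGraph module already loaded, and a hypothetical graph named "social"), creating and querying a graph from redis-cli looks like this:

```shell
# Create two nodes and a relationship in the "social" graph
redis-cli GRAPH.QUERY social \
  "CREATE (:Person {name:'Ann'})-[:KNOWS]->(:Person {name:'Bob'})"

# Pattern-match the relationship back out with Cypher
redis-cli GRAPH.QUERY social \
  "MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN a.name, b.name"
```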
|
OPCFW_CODE
|
Template Matching of a single template with multiple source images
I have a template "X" (symbol) which is cropped out of "Image1". I am using OpenCV's matchTemplate() method to match the template "X" with "Image1" and it does that successfully. However, I have another image called "Image2" which also contains the X symbol, but when I use the template "X" to match with "Image2", it shows an invalid match. Any help would be much appreciated.
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt

def match_template(img_path, img_template_path):
    if img_path is not None:
        img = cv.imread(img_path, 0)
        if img is not None:
            template = cv.imread(img_template_path, 0)
            temp_h, temp_w, img_h, img_w = None, None, None, None
            if len(img.shape) > 2:
                print("This image is in color..")
                print("Converting it into a grayscale image")
                img = cv.cvtColor(src=img, code=cv.COLOR_BGR2GRAY)
            else:
                temp_w, temp_h = template.shape[::-1]
                img_w, img_h = img.shape[::-1]
            # ims = cv.resize(img, (960, 540))
            if temp_h is not None and img_h is not None:
                if temp_w < img_w and temp_h < img_h:
                    # TM_SQDIFF(_NORMED) is a difference measure: the best
                    # match is the MINIMUM, so threshold min_val, not the max
                    res = cv.matchTemplate(img, template, cv.TM_SQDIFF_NORMED)
                    min_val, max_val, min_loc, max_loc = cv.minMaxLoc(res)
                    threshold = 0.1
                    match = min_val <= threshold
                    if match is True:
                        top_left = min_loc
                        bottom_right = (top_left[0] + temp_w, top_left[1] + temp_h)
                        cv.rectangle(img=img, pt1=top_left, pt2=bottom_right, color=(0, 255, 0), thickness=5)
                        # plt.subplot(121), plt.imshow(res, cmap='gray')
                        # plt.title('Matching Result'), plt.xticks([]), plt.yticks([])
                        plt.imshow(img, cmap='gray')
                        plt.title('Detected Point'), plt.xticks([]), plt.yticks([])
                        plt.show()
                        print("Coordinates of the plotted image are ", top_left, " ", bottom_right)
                        cv.waitKey(0)
                    else:
                        print("Template not matched with the image")
                else:
                    print("Template height and width must be less than the original image's height and width \n")
            else:
                print("Image height and template height are None")
        else:
            print("Image not read successfully!!")
    else:
        print("Image path not provided")
Can you upload the template and images?
Both source images are almost identical, which is why I only uploaded one.
Would you upload both source images? It would help to see what's different between two images. You might also try and set a lower threshold.
You will get a better match if you do template matching with the colored image and colored template. OpenCV matchTemplate() allows that. See the documentation.
First of all, template matching only works when the images are almost the same. Small changes to the desired object in new frames can make it difficult to get a good match.
In your case, the first example works properly because you cropped the template straight out of it, exactly like the examples do: Example 1 and Example 2.
I don't suggest using template matching alone. The matchShapes() function is also very effective: you can get the contour of your template image (for example, the X symbol's contour) and compare it against the other image's contours. My suggestion is to support your template matching with OpenCV's other structural functions.
The same problems are also mentioned in these posts: here and also here
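On the threshold point raised above: raw TM_SQDIFF scores are unbounded, so a fixed cutoff like 0.9 means nothing across images, whereas normalized scores are comparable. The idea behind OpenCV's normalized modes can be sketched in plain NumPy (a didactic sketch of normalized cross-correlation, not OpenCV's actual, much faster implementation):

```python
import numpy as np

def ncc_match(img, tmpl):
    """Slide tmpl over img and return a normalized cross-correlation map.

    Scores lie in [-1, 1], so a single threshold (say 0.9) keeps its
    meaning across different source images, unlike raw TM_SQDIFF values.
    """
    th, tw = tmpl.shape
    t = tmpl.astype(float) - tmpl.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.full((img.shape[0] - th + 1, img.shape[1] - tw + 1), -1.0)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            w = img[y:y + th, x:x + tw].astype(float)
            w = w - w.mean()
            w_norm = np.sqrt((w ** 2).sum())
            if w_norm > 0 and t_norm > 0:
                out[y, x] = (w * t).sum() / (w_norm * t_norm)
    return out

# Plant a template inside an empty image: the score peaks at ~1.0
# exactly where the template was placed.
rng = np.random.default_rng(0)
tmpl = rng.random((4, 4))
img = np.zeros((12, 12))
img[3:7, 5:9] = tmpl
res = ncc_match(img, tmpl)
print(np.unravel_index(np.argmax(res), res.shape))
```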
|
STACK_EXCHANGE
|
Math behind the formula for radiance (from radiometry)
Could someone please help me understand how to interpret this formula $L=\frac{d^2\Phi}{dA\,dw}$ ($\Phi$ the radiant flux, $A$ area, $w$ solid angle) as "radiant flux emitted, reflected, transmitted or received by a given surface, per unit solid angle per unit projected area"?
I have a basic knowledge of multivariable calculus.
I know that $\Phi$ is the radiant flux, which is "radiant energy emitted, reflected, transmitted, or received per unit time". Starting from this, we take a derivative, first, say, with respect to $A$, which gives us a function indicating how the radiant flux $\Phi$ changes with a change in area. Then we take another derivative of that result, and now I'm trying to interpret $\frac{d(d\Phi/dA)}{dw}$. I see it as the change of our "change with area" function with respect to $w$: I visualise it as fixing $A$ to some constant value $a$ in the function $S(A,w)=\frac{d\Phi}{dA}$ (let the name be $S$) and then evaluating how $S$ changes with respect to $w$ at each point $(a,w_i)$.
So the resulting function gives us, by my interpretation, how the change in $\Phi$ with respect to $A$ itself changes with respect to $w$, and as you can tell, that is not "radiant flux per unit solid angle per unit projected area".
What am I missing in my reasoning?
In short: Radiance is useful because it indicates how much of the power emitted, reflected, transmitted or received by a surface will be received by an optical system looking at that surface from a specified angle of view.
My interpretation of differentials is what confused me, I think.
It helped to simplify the radiance formula $L(A,w)=\frac{d^2\Phi}{dA\,dw}$ by removing the differentials: assume the light source gives the same amount of energy per angle (it spreads its power equally in every direction) and the same amount per piece of source area (energy / area is the same constant at every source point). Then dividing the power by a particular angle gives the power in a particular direction, and dividing that by the area gives the power per direction per point of the source: $L(A,w)=\frac{\Delta\Phi/\Delta w}{\Delta A}$. Which is exactly what Wikipedia says.
And to go back to differentials (our initial radiance formula), we allow a non-uniform spread of energy per direction ($dw$) and per piece of source area ($dA$).
(I hope I got this right; I still get confused when I see differentials.)
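The finite-difference picture above can be written out explicitly. For a perfectly uniform source no limits are needed, and the full formula is just that ratio made local:

```latex
% Perfectly uniform source: power \Phi spread evenly over area A and solid angle w
L = \frac{\Phi}{A\,w}
\qquad \text{(the same constant at every point, in every direction)}

% Non-uniform source: localize the same ratio.
% Power leaving a small patch, per unit area:
\frac{d\Phi}{dA}
% That patch's output split over directions, per unit solid angle:
L = \frac{d}{dw}\!\left(\frac{d\Phi}{dA}\right) = \frac{d^2\Phi}{dA\,dw}
```

So the nested derivative is not a new kind of quantity: it is the uniform ratio $\Phi/(A\,w)$ made local, which is why it can still be read as "flux per unit area per unit solid angle".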
|
STACK_EXCHANGE
|
dimensions={
w:parseInt($(window).innerWidth()),
h:parseInt($(window).innerHeight()),
circle: 20
}
Animator = function(){
this.paused = false;
this.toggle=function(){
this.paused= !this.paused;
};
this.play=function(){
this.paused=false;
};
this.pause=function(){
this.paused=true;
};
this.reset=function(){
resetFftCircles(anim);
};
}
function sanityCheck() {
var loader = q.now();
if (loader) {
if (loader._playing == loader._paused) {
console.log("INSANITY!");
loader._playing = !loader._paused;
}
}
}
anim = new Animator();
function resizeDimensions(elem,width,height){
//calc scale coefficients and store current position
var scaleX = width/elem.bounds.width;
var scaleY = height/elem.bounds.height;
var prevPos = new Point(elem.bounds.x,elem.bounds.y);
//apply calc scaling
elem.scale(scaleX,scaleY);
//reposition the elem to previous pos(scaling moves the elem so we reset it's position);
var newPos = prevPos + new Point(elem.bounds.width/2,elem.bounds.height/2);
elem.position = newPos;
}
var durationLoop = new Group();
durationLoop.bringToFront();
var previousTime = 0;
durationLoop.onFrame = function(e) {
var timeObj = q.time();
if (Math.round(previousTime) != Math.round(timeObj.current) && !scrubbing) {
$('.time.time-current').html(timeToStringFormat(timeObj.current, TIME_FORMAT));
q.updateProgressBars(timeObj);
$('.time.time-left').html(timeToStringFormat(timeObj.diff, TIME_FORMAT));
}
previousTime = timeObj.current;
}
var fftCircles=new Group();
for(var i = 0; i < 16; i++){ // i(totalwidth/32)+totalwidth/16
c1= new Path.Circle(new Point((i*dimensions.w/16), dimensions.h/2), dimensions.circle);
c1.fillColor="white";
c2=c1.clone();
c1.fillColor="black";
var tempG=new Group();
tempG.addChildren([c1,c2]);
fftCircles.addChild(tempG);
}
var resetFftCircles = function(anim) {
for(var i=0; i<fftCircles.children.length;i++){
var cG=fftCircles.children[i];
for(var j=0;j<2;j++){
cG.children[j].position.y=dimensions.h/2;
}
}
}
var waveFormCrv = new Path();
waveFormCrv.strokeColor = 'black';
waveFormCrv.strokeWidth = 5;
for (var i=0; i<256; i++) {
waveFormCrv.add(new Point(i*(dimensions.w/255),0));
}
// Define two points which we will be using to construct
// the path and to position the gradient color:
var topLeft =[0,0];
var bottomRight = [$("#visualizer").width(),$("#visualizer").height()];
// Create a rectangle shaped path between
// the topLeft and bottomRight points:
var gradientBg = new Path.Rectangle({
topLeft: topLeft,
bottomRight: bottomRight
});
// Fill the path with a gradient of three color stops
// that runs between the two points we defined earlier:
gradientBg.fillColor= {
gradient: {
stops: ['blue', 'blue','blue'],
radial:true
},
origin:gradientBg.position,
destination: gradientBg.bounds.rightCenter
};
fftCircles.bringToFront();
waveFormCrv.bringToFront();
waveFormCrv.smooth({ type: 'catmull-rom', factor: 0.8 });
var bright=0;
gradientBg.onFrame = function(event){
if(anim.paused) return;
if(event.count % 1 == 0){
var currentSpec=fft.analyze();
peaking = detectPeak();
if(peaking){
console.log('pk');
bright = 1;
} else {
bright *= .99;
}
var c1 = getColorFromAmplitude(255,1,0,50),
c2 = getColorFromAmplitude(255,1,50,150),
c3 = getColorFromAmplitude(255,1,150,255);
var newHue=[c1, c2, c3];
var colour= this.fillColor;
for (clr in colour.gradient.stops) {
colour.gradient.stops[clr].color.hue=newHue[clr] * 60 + 180;
colour.gradient.stops[clr].color.brightness=bright;
}
}
}
// the directions of the circles
var dirs=[1,-1];
var gravity = 5;
fftCircles.onFrame = function(event) {
sanityCheck();
if(anim.paused) return;
var currentAvgs = getSubdividedAvg(trimZeroes(fft.analyze()));//get averages
for (var i in fftCircles.children) {
var currChild = fftCircles.children[i];
for (var j = 0; j < 2; j++){
var currCircle=currChild.children[j];
if (currCircle.position.y <= fftCircles.bounds.center.y - 300
&& dirs[j] == -1) {
currCircle.bringToFront();
dirs[0]*=-1;
dirs[1]*=-1;
}
if(currCircle.position.y>fftCircles.bounds.center.y+300 && dirs[j]==1){
dirs[0]*=-1;
dirs[1]*=-1;
}
var dy=((currentAvgs[i]/40));
//currCircle.(currentAvgs[i]/currCircle.scaling, currCircle.bounds.center);;
currCircle.translate(new Point(0,dy*dirs[j]));// here I set all my y values. for half they are positive.
}
currChild.position.y=dimensions.h/2;
}
fftCircles.bringToFront();
fftCircles.position=new Point(waveFormCrv.bounds.width/2,dimensions.h/2)
if(!q.isPlaying()) {
anim.reset();
}
}
waveFormCrv.onFrame=function(event){
if(anim.paused) return;
if(event.count%4==0){
var currentWave=getWaveform(200);
for(var i in waveFormCrv.segments){
waveFormCrv.segments[i].point.y=currentWave[i];
}
waveFormCrv.position=new Point(waveFormCrv.bounds.width/2,dimensions.h*2/3);
}
}
var wv2=waveFormCrv.clone();
wv2.strokeColor='white';
wv2.position=new Point(waveFormCrv.bounds.width/2,dimensions.h/3);
wv2.onFrame=function(event){
if(anim.paused) return;
if(event.count%4==0){
var currentWave=getWaveform(200);
for(var i in wv2.segments){
wv2.segments[i].point.y=currentWave[i];
}
wv2.position=new Point(wv2.bounds.width/2,dimensions.h/3);
}
};
$(window).resize(function(e){
//var oldPos= gradientBg.bottomRight;
dimensions.w = $(window).innerWidth();
dimensions.h = $(window).innerHeight(); // was innerWidth() by mistake
resizeDimensions(gradientBg, $(window).innerWidth(), $(window).innerHeight());
waveFormCrv.position = new Point(waveFormCrv.bounds.width/2, dimensions.h/2); // a Path has no .width; use .bounds.width
fftCircles.position=new Point(fftCircles.bounds.width/2,dimensions.h/2);
});
|
STACK_EDU
|
We have x HDX boxes which are out on the internet. We tried multiple ways to get an address book onto them, based on the central CMA configuration: either by provisioning the system or by specifying a GDS server. Sadly, I never succeeded in maintaining stable connections, and I have found no reason so far. A Polycom that does not recover when it goes into sleep mode? I don't know. So endpoints lost their LDAP and GDS server connections from time to time, and a manual reboot of the endpoint was needed to get the entries in the address book back. Frustration for end users, because rebooting is not a 1-2-3 operation and thus time consuming. So I'm evaluating going back to the standalone config.
I noticed there is an import/export facility in the HDX for address entries. Is there a tool to administer this, or any other way to replicate an address book? Thanks!
With GDS you have the opportunity to "Save Global Directory to System" (refer to HDX Admin's Guide -> Configure the Global Directory): "When enabled, this setting allows Polycom HDX systems to display global entries in the directory in case the system loses connection with the Polycom GDS Directory."
You can replicate address book using the API commands exportdirectory / importdirectory. More details can be found in HDX Integrator's Reference Manual, pp. 242 and 307.
Hope this helps,
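The exportdirectory / importdirectory workflow lends itself to scripting. A sketch of the merge step, assuming a simple one-entry-per-line "name,address" export format (check what your HDX actually produces; `merge_directories` is a hypothetical helper, not a Polycom tool):

```python
# Hypothetical helper for replicating HDX address books via export/import.
# The "name,address" line format below is an assumption about the export,
# not documented Polycom behavior.

def merge_directories(*exports):
    """Union several exported directory files, de-duplicated by name.

    Later exports win on conflicts, so a site-specific file can be
    layered over a shared base list before importing to each endpoint.
    """
    merged = {}
    for text in exports:
        for line in text.splitlines():
            line = line.strip()
            if not line:
                continue
            name, _, address = line.partition(",")
            merged[name.strip()] = address.strip()
    return "\n".join(f"{n},{a}" for n, a in sorted(merged.items()))

base = "Boardroom,10.0.0.5\nLobby,10.0.0.6"
site = "Lobby,203.0.113.6"  # internet-facing address overrides the LAN one
print(merge_directories(base, site))
```

This also matches the two-address problem described later in the thread: keep one base list per network and merge before import.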
Okay, the problems I'm facing are somewhat more complex. First of all, I can't use GDS, since I have systems out on the internet. The only way to get them the address book is to provision them, and I don't want that because then they are dependent on the VBPST.
Secondly, the problem with the GAB is that I have different IPs for each endpoint, depending on where the system is located (some endpoints are in our internal LAN, others out on the internet). This means if you're calling from the internal LAN to an endpoint on the internal LAN, you specify, for example, IP A; if you call the same system from the internet, you specify IP B. There is a way around it, which is to specify DNS names. But then again, if you provision systems on the internet, you rely on DNS servers for the internal LAN, which is incorrect. So you see, not an easy solution here. Perhaps the best way would be to move our internal endpoints to the internet.
So far I've found we can create an address book on one endpoint, export it, and import it to another endpoint. The first catch is that you need to create the address book on the endpoint with the lowest firmware.
|
OPCFW_CODE
|
I watched the video, and overall liked it. Since you asked for feedback: with the video format, I think I would prefer to see the question and be told to pause to have time to think of the answer before it appears instead of having a set amount of time for each question, since with at least some of them you either know it or you don't right off the bat. As it is, I got a little annoyed waiting a minute for the answer if I already knew it.
The other main comment is something DejMar sort of alluded to: for some of the riddles there are potentially multiple answers that would make sense.
In particular the second one: I also interpreted it as most likely being a gotcha where each number in the sequence is n/(n+1), so the final term x/1000 comes after 9/10 and should equal 10/11, meaning x = 10 × 1000 / 11. Only after realizing that it wouldn't be an integer did I decide that it probably wasn't what you intended to ask, so the answer should be the other thing I had in mind, x = 999.
With the light switch, when I saw it here I thought there must be three positions, with something like "off" going to "medium", "medium" going either to "off" or to "high", and "high" going only to "medium", so there would be a unique answer: after any even number of flips the switch must be back at "medium". With the YouTube version, if you change directions during flipping you could end up either at the original position or 180 degrees away.
For the question of painting 8s, I could have interpreted it a couple of ways: you could argue that he would only paint 8 once (if it's referring to house number 8, or just the number 8 and not other numbers that happen to have 8 as a digit), that he would paint it 20 times (if you mean the total number of digits that are 8), or maybe even 19 times (if you mean the total number of houses with any 8 on them, although that's a less likely interpretation).
DejMar commented on the ambiguity of whether the question with Little Johnny is talking about making it home with the original $300 or the money that the man is offering, but I suppose that ambiguity needs to be present or else it wouldn't be much of a riddle. And the last question seems like it might be a bit offensive if asked to a woman.
It might not be possible to make the questions entirely unambiguous, especially the question about Little Johnny since the ambiguity is what makes it a riddle in the first place, but sometimes simple things like saying "how many times does he have to paint the digit 8" can help make it unambiguous. In general, I would say to check for (and ask other people to check for) unintended ways that the questions might be interpreted.
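For what it's worth, the "20 times" and "19 houses" readings can both be checked directly, assuming the usual version of the riddle with houses numbered 1 to 100:

```python
# Count the digit 8 across house numbers 1..100, two ways:
# total occurrences of the digit, and houses containing the digit at all.
count = sum(str(n).count("8") for n in range(1, 101))
houses_with_8 = sum("8" in str(n) for n in range(1, 101))
print(count, houses_with_8)  # -> 20 19
```

(88 contributes two painted digits but only one house, which is where 20 and 19 diverge.)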
|
OPCFW_CODE
|
Novel–Chaotic Sword God–Chaotic Sword God
Chapter 2958 – Seeing He Qianqian Again
The customs of the Heavenly Crane clan were a little different. Any outsiders who came to visit had to go through the Divine City of the Heavenly Crane, which would pass word on to the clan. They would only be let in once the upper echelon of the clan granted permission.
Quite some time later, He Qianqian turned around and left the training grounds without saying anything at all. Two hours later, she had already left the Heavenly Crane clan and emerged in the Divine City of the Heavenly Crane, making her way to the city lord's residence.
This city was called the Divine City of the Heavenly Crane!
She immediately changed directions and flew into the snowy fir forest outside the Divine City of the Heavenly Crane.
At this moment, He Qianqian's trusted maidservant presented a wooden box before He Qianqian and handed it to her.
He Qianqian said nothing. Her gaze was fixed on Jian Chen, at times mixed, at times sharp, and at times frosty. It was quite obvious she was filled with mixed feelings right now.
The city lord immediately returned to the Heavenly Crane clan with the tablet as soon as possible. Eventually, the tablet reached He Qianqian's hands after being passed along by many people.
Many guards dressed in snow-white armour, with the cultivation of Gods, stood as straight as spears, guarding the entrance of the residence loyally.
Jian Chen hovered above the icy-cold tundra and gazed at the snow-white city several dozen kilometers away before taking a step.
At this moment, on one of the training grounds in the Heavenly Crane clan, He Qianqian wore a set of white, tight-fitting robes that completely outlined her slender and graceful figure. Right now, she held a sword, having just unleashed a God Tier Battle Skill, which made energy surge through the training grounds. The divine might of the God Tier Battle Skill slowly receded.
When he took the step, his figure immediately vanished. When he reappeared, he was already standing within the Divine City of the Heavenly Crane.
He Qianqian's clothes were even whiter than the snow. As she stood in the world of ice, she seemed to become one with it. She kept a thirty-meter distance between herself and Jian Chen, and her gaze towards Jian Chen was extremely mixed.
"Who exactly are you?" Only after quite a while did He Qianqian speak. She realised she had never truly got to know the Yang Yutian before her.
Jian Chen nodded.
Soon, He Qianqian spotted the familiar figure in the forest.
"I've come hoping that the city lord can help me with something. I hope the city lord can pass this tablet on to He Qianqian of the Heavenly Crane clan for me," Jian Chen said to the city lord as he took out a tablet. Meanwhile, he intentionally gave off the presence of a Chaotic Prime.
Below the icy mountain was a huge city completely carved out of ice.
The city lord's residence was right in front of Jian Chen!
Immediately, a guard came before Jian Chen and asked, "Senior, how may I be of assistance?"
The Heavenly Crane clan was akin to a hermit clan on the Ice Pole Plane. Weaker cultivators had no idea of the Heavenly Crane clan's existence at all.
Having just used a God Tier Battle Skill, He Qianqian seemed rather out of breath. She wiped away her sweat and opened the wooden box in a very unconcerned manner.
He Qianqian grabbed the tablet instinctively. Her mind was in a daze, and her feelings were mixed.
"Yang Yutian shouldn't be your true appearance. Your current appearance must be a disguise created through some special method as well," He Qianqian said. Her voice was rather cold.
It was not just the Heavenly Crane clan. This was a custom followed by many peak organisations on the Ice Pole Plane.
Perhaps because the strength Jian Chen had displayed was too great, the city lord dared not brush him off, let alone decline Jian Chen's request.
|
OPCFW_CODE
|
describe('Rivets.Binding', function() {
var model, el, view, binding, adapter, opts
beforeEach(function() {
rivets.prefix = 'data'
adapter = rivets.adapters['.']
el = document.createElement('div')
el.setAttribute('data-text', 'obj.name')
view = rivets.bind(el, {obj: {name: 'test'}})
binding = view.bindings[0]
model = binding.model
})
it('gets assigned the proper binder routine matching the identifier', function() {
binding.binder.routine.should.equal(rivets.binders.text)
})
describe('bind()', function() {
it('subscribes to the model for changes via the adapter', function() {
sinon.spy(adapter, 'observe')
binding.bind()
adapter.observe.calledWith(model, 'name', binding.sync).should.be.true
})
it("calls the binder's bind method if one exists", function() {
binding.bind.should.not.throw()
binding.binder.bind = function(){}
sinon.spy(binding.binder, 'bind')
binding.bind()
binding.binder.bind.called.should.be.true
})
describe('with preloadData set to true', function() {
beforeEach(function() {
rivets.preloadData = true
})
it('sets the initial value', function() {
sinon.spy(binding, 'set')
binding.bind()
binding.set.calledWith('test').should.be.true
})
})
describe('with dependencies', function() {
beforeEach(function() {
binding.options.dependencies = ['.fname', '.lname']
})
it('sets up observers on the dependant attributes', function() {
binding.bind()
adapter.observe.calledWith(model, 'fname', binding.sync).should.be.true
adapter.observe.calledWith(model, 'lname', binding.sync).should.be.true
})
})
})
describe('unbind()', function() {
describe('without a binder.unbind defined', function() {
it('should not throw an error', function() {
binding.unbind.should.not.throw()
})
})
describe('with a binder.unbind defined', function() {
beforeEach(function() {
binding.binder.unbind = function(){}
})
it('should not throw an error', function() {
binding.unbind.should.not.throw()
})
it("calls the binder's unbind method", function() {
sinon.spy(binding.binder, 'unbind')
binding.unbind()
binding.binder.unbind.called.should.be.true
})
})
})
describe('set()', function() {
it('performs the binding routine with the supplied value', function() {
sinon.spy(binding.binder, 'routine')
binding.set('sweater')
binding.binder.routine.calledWith(el, 'sweater').should.be.true
})
it('applies any formatters to the value before performing the routine', function() {
view.formatters.awesome = function(value) { return 'awesome ' + value }
binding.formatters.push('awesome')
sinon.spy(binding.binder, 'routine')
binding.set('sweater')
binding.binder.routine.calledWith(el, 'awesome sweater').should.be.true
})
it('calls methods with the object as context', function() {
binding.model = {foo: 'bar'}
sinon.spy(binding.binder, 'routine')
binding.set(function() { return this.foo })
binding.binder.routine.calledWith(el, binding.model.foo).should.be.true
})
})
describe('publish()', function() {
it("should publish the value of a number input", function() {
numberInput = document.createElement('input')
numberInput.setAttribute('type', 'number')
numberInput.setAttribute('data-value', 'obj.num')
view = rivets.bind(numberInput, {obj: {num: 42}})
binding = view.bindings[0]
model = binding.model
numberInput.value = 42
sinon.spy(adapter, 'set')
binding.publish({target: numberInput})
adapter.set.calledWith(model, 'num', '42').should.be.true
})
})
describe('publishTwoWay()', function() {
it('applies a two-way read formatter to function same as a single-way', function() {
view.formatters.awesome = {
read: function(value) { return 'awesome ' + value }
}
binding.formatters.push('awesome')
sinon.spy(binding.binder, 'routine')
binding.set('sweater')
binding.binder.routine.calledWith(el, 'awesome sweater').should.be.true
})
it("should publish the value of a number input", function() {
rivets.formatters.awesome = {
publish: function(value) { return 'awesome ' + value }
}
numberInput = document.createElement('input')
numberInput.setAttribute('type', 'number')
numberInput.setAttribute('data-value', 'obj.num | awesome')
view = rivets.bind(numberInput, {obj: {num: 42}})
binding = view.bindings[0]
model = binding.model
numberInput.value = 42
binding.publish({target: numberInput})
adapter.set.calledWith(model, 'num', 'awesome 42').should.be.true
})
it("should format a value in both directions", function() {
rivets.formatters.awesome = {
publish: function(value) { return 'awesome ' + value },
read: function(value) { return value + ' is awesome' }
}
valueInput = document.createElement('input')
valueInput.setAttribute('type','text')
valueInput.setAttribute('data-value', 'obj.name | awesome')
view = rivets.bind(valueInput, {obj: { name: 'nothing' }})
binding = view.bindings[0]
model = binding.model
valueInput.value = 'charles'
binding.publish({target: valueInput})
adapter.set.calledWith(model, 'name', 'awesome charles').should.be.true
sinon.spy(binding.binder, 'routine')
binding.set('fred')
binding.binder.routine.calledWith(valueInput, 'fred is awesome').should.be.true
})
it("should not fail or format if the specified binding function doesn't exist", function() {
rivets.formatters.awesome = { }
valueInput = document.createElement('input')
valueInput.setAttribute('type','text')
valueInput.setAttribute('data-value', 'obj.name | awesome')
view = rivets.bind(valueInput, {obj: { name: 'nothing' }})
binding = view.bindings[0]
model = binding.model
valueInput.value = 'charles'
binding.publish({target: valueInput})
adapter.set.calledWith(model, 'name', 'charles').should.be.true
binding.set('fred')
binding.binder.routine.calledWith(valueInput, 'fred').should.be.true
})
it("should apply read binders left to right, and write binders right to left", function() {
rivets.formatters.totally = {
publish: function(value) { return value + ' totally' },
read: function(value) { return value + ' totally' }
}
rivets.formatters.awesome = {
publish: function(value) { return value + ' is awesome' },
read: function(value) { return value + ' is awesome' }
}
valueInput = document.createElement('input')
valueInput.setAttribute('type','text')
valueInput.setAttribute('data-value', 'obj.name | awesome | totally')
view = rivets.bind(valueInput, {obj: { name: 'nothing' }})
binding = view.bindings[0]
model = binding.model
binding.set('fred')
binding.binder.routine.calledWith(valueInput, 'fred is awesome totally').should.be.true
valueInput.value = 'fred'
binding.publish({target: valueInput})
adapter.set.calledWith(model, 'name', 'fred totally is awesome').should.be.true
})
it("binders in a chain should be skipped if they're not there", function() {
rivets.formatters.totally = {
publish: function(value) { return value + ' totally' },
read: function(value) { return value + ' totally' }
}
rivets.formatters.radical = {
publish: function(value) { return value + ' is radical' },
}
rivets.formatters.awesome = function(value) { return value + ' is awesome' }
valueInput = document.createElement('input')
valueInput.setAttribute('type','text')
valueInput.setAttribute('data-value', 'obj.name | awesome | radical | totally')
view = rivets.bind(valueInput, {obj: { name: 'nothing' }})
binding = view.bindings[0]
model = binding.model
binding.set('fred')
binding.binder.routine.calledWith(valueInput, 'fred is awesome totally').should.be.true
valueInput.value = 'fred'
binding.publish({target: valueInput})
adapter.set.calledWith(model, 'name', 'fred totally is radical').should.be.true
})
})
describe('formattedValue()', function() {
it('applies the current formatters on the supplied value', function() {
view.formatters.awesome = function(value) { return 'awesome ' + value }
binding.formatters.push('awesome')
binding.formattedValue('hat').should.equal('awesome hat')
})
describe('with a multi-argument formatter string', function() {
beforeEach(function() {
view.formatters.awesome = function(value, prefix) {
return prefix + ' awesome ' + value
}
binding.formatters.push("awesome 'super'")
})
it('applies the formatter with arguments', function() {
binding.formattedValue('jacket').should.equal('super awesome jacket')
})
})
})
describe('getValue()', function() {
it('should use binder.getValue() if present', function() {
binding.binder.getValue = function(el) {
return 'foo'
}
binding.getValue(el).should.equal('foo')
})
it('binder.getValue() should have access to passed element', function() {
binding.binder.getValue = function(el) {
return el.dataset.foo
}
el.dataset.foo = 'bar'
binding.getValue(el).should.equal('bar')
})
it('binder.getValue() should have access to binding', function() {
binding.binder.getValue = function(el) {
return this.foo
}
binding.foo = 'bar'
binding.getValue(el).should.equal('bar')
})
})
})
|
STACK_EDU
|
Icons identify an app's purpose and serve as the launch mechanism on mobile devices and desktops. The app stores (Apple App Store, Google Play, Windows, Amazon) and even progressive web apps all require at least one 'Application' icon to be included with each app. There are several more icon sizes for the various mobile devices. Read on to learn how to prepare icons for every device type.
Creating your own mobile app icons is not too difficult if you know what you need. In this tutorial we will be using 'Paint.net' graphic software to demonstrate how to create an application icon. We use this software platform because it is freeware and easy to use.
This tutorial is not going to tell you what to put into your icon; that is up to you. However, here is a list of things to keep in mind.
Step 2b. Select the 'paint bucket' icon. Place your cursor inside the rectangle and click the left mouse button; this fills in the blank space inside the border. We used a blue that matches a color of our website and is in high contrast to the yellow of the caricature. You will want to use a color that makes sense to you.
Step 2c. Adding the bevel effect. From the "Effects" dropdown menu select 'Object' --> 'Bevel Object.' A popup toolbox appears. From here you can change the size, colors and alternate lighting. For this exercise we accepted the default settings, but feel free to play with the settings to achieve your desired look.
In the top toolbar the corner size can be changed in the "Corner size:" dropdown field. "Style" should be set to solid; other options include 'dashed', 'dot', 'dot dash' and 'dash dot dot.'
Step 3. Draw an outline (optional). Some mobile app icons blend into the background too easily; when that happens they tend to lose the button effect. To correct that you can add a high-contrast narrow outline to the icon.
In Paint.net select "shapes" shortcut key from the "Tools" toolbox (bottom of toolbox). Set the 'brush' size to 1, 2 or 3px depending on the size of your icon. Set the shape type to 'rounded rectangle.' Set background color to black in the "Colors" toolbox. Then draw a rectangle by placing your curser at location 0,0 (upper left corner). While holding down the left mouse button draw the curser to location 1024,1024, then let up the left mouse button. A thin rounded corner border appears that outlines the icon.Step 4. creating the centerpiece. Well that is pretty much up to you. For simplicity you could simply type in your businesses acornym. For example BWT for "Best Website Tools." Or MaM for "Miappmaker.com." We used a store bought characiture because it fits with this app so well. Step 5. Save your work. This is so important I will say it loud SAVE YOUR WORK In the 'File" dropdown menu select 'saveas'. A toolbox screen appears where you can name your file. Give it an appropriate name like "mobile-app-icon-1024." I like to add the size of the image to the back end of the filename. Make sure you save your original as a .pdn filetype. This format saves all the layers and their state (on/off).
Now that you have saved the original as a .pdn, you can safely resize it to create more versions of your icon. A popular size is 144x144px.

Step 6. Resizing. To resize mobile app icons in Paint.net, select "Resize" from the "Image" dropdown menu in the top toolbar. A popup tool will appear. Make sure the "Maintain aspect ratio" option is checked. In the "Width" field input the new size; the height value will be calculated by Paint.net. Click "OK" to resize. The image will now be at the smaller size. Save this as a new file. You must change the filename to prevent overwriting your original artwork.
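The "Maintain aspect ratio" option simply scales the height by the same factor as the width; Paint.net does this for you, but it is easy to check by hand. A quick sketch in plain Python, using the sizes from this tutorial:

```python
# 1024x1024 original resized to 144px wide; height scales by the same factor.
orig_w, orig_h = 1024, 1024
new_w = 144
new_h = round(orig_h * new_w / orig_w)
print(new_w, new_h)  # 144 144
```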
The App Stores prefer .png images for mobile app icons. When you 'Save As' a new image, make sure you set "Save as type" to .png. This is important because .pngs allow for transparency.
This concludes the mobile app icons tutorial. As always if you have any questions feel free to contact BWT.
Creative Icon Solutions is a service provided by BWT to create custom icon sets for mobile and desktop applications. All apps require at least one icon to be associated with the app. However, you can submit up to 15 different sizes for various purposes. Different platforms (iOS, Android, Retina, desktop and more) use various sizes for app icon display.
Do you need mobile app icons for your next project? Confused about which sizes of icon you need for which application? Why not get a set of similar icons and have them all covered?
Are you looking for an icon template? Check out our template and tutorial bundle just below. The downloadable zip file contains 10 pre-made sizes of PaintdotNet (.pdn) templates. Simply include your own centerpiece. Easy-peasy, nice and easy.
Our Price $9.97
Get these mobile app icons. Includes these sizes: 1024px, 512px, 180px, 144px, 114px, 96px, 72px, 57px, 48px and 36px square, plus this tutorial in a PDF document. The 1024px size has 3 backgrounds: red, green and blue. Use this template to build your own icon sets.
Why buy an icon set from BWT? Great support. We want you to be happy so we are always here for you. Drop us a line anytime you need help and we will be glad to help you out.
First and foremost, I'd like to apologize if this thread is in the wrong forum.
I noticed the problem after a three day holiday. After I got home I noticed that my PC was powered off which is peculiar since I never turn my PC off, but I figured that there had been a minor power outage while I was away. I had tried turning my PC back on and at first everything seemed fine, then I noticed that Windows was running particularly slow and that the sound was off (it sounded more like white noise than sound). This led me to try and reboot my system which is where all the problems came to fruition.
My computer's fans turned on, the hard drive began spinning, and all the lights came on, but then after about 5 secs it would power down and attempt to restart again. This restart cycle persisted for about 2 minutes before my PC gave up altogether. I again pressed the power button, which successfully started my computer, but again the computer was very slow, so I decided that it may have been a virus. I started a full system scan with Avast! but after 5 minutes and 21 seconds the computer abruptly shut down again. It again attempted its restart loop and quit again after about 2 minutes. I pressed the power button again and it powered up very slowly and abruptly shut off again after about 5 minutes.
I asked my roommate if anything was wrong with his machine. He notified me that while he was out our landlord had come and turned off our power to replace a light switch, and after he had returned home from work he too had experienced problems with his machine. His hadn't shown the same problems as mine, but it was responding very slowly. He said his machine was acting strangely for about 6 reboots, but now it's just very slow.
I'm really hoping that someone can give me some aid in fixing this issue.
I don't have time at this point to give you links to my components but I will give you a basic component list.
Motherboard: Biostar (T series) B3 Revision
Processor: Intel i5 2500k
Ram: G. Skill 8gb 1600MHz
HDD: Seagate Barracuda Black 750GB
GFX: XFX Radeon HD5870 1GB
I will certainly edit at a later time with a more precise parts list with links.
You may unplug your power supply and remove the motherboard battery for about one minute to reset the BIOS. Just carefully press on the metal tang that holds the battery down and let it pop up enough to remove it. If this doesn't get your board to POST, try another power supply.
I actually haven't tried your solution as of yet, but I did take my PC completely apart and blow all the dust out of it, and after that it seems to be working the way I remember it, so for now, I'm not going to mess with anything, but should it act up again I certainly will.
HTTP ERROR 500 in my site
hey, I use this plugin in my site, but after i update yesterday, i can't visit the sitmap page.
here it is: http://ionichina.com/sitemap.xml.
Hi, I also have this issue!
I am sorry, we are unable to reproduce this.
@DongHongfei can you please provide the Rails logging (stack trace) of the error ?
+1, have this issue too.
Unless someone provides the Rails logging (stack trace) we're unable to understand what is going on, and we're unable to fix this issue.
@discoursehosting Are there any command lines to do that?
Found, maybe that could help you!
Started GET "/sitemap.xml" for <IP_ADDRESS> at 2017-10-08 14:09:33 +0000
Processing by DiscourseSitemap::SitemapController#index as XML
Rendering plugins/discourse-sitemap/app/views/discourse_sitemap/sitemap/index.erb
Rendered plugins/discourse-sitemap/app/views/discourse_sitemap/sitemap/_header.erb (1.2ms)
Rendered plugins/discourse-sitemap/app/views/discourse_sitemap/sitemap/index.erb (46.0ms)
Completed 500 Internal Server Error in 149ms (ActiveRecord: 13.8ms)
ActionView::Template::Error (comparison of Fixnum with nil failed)
/var/www/discourse/plugins/discourse-sitemap/app/views/discourse_sitemap/sitemap/index.erb:3:in >'
Started GET "/sitemap.xml" for <IP_ADDRESS> at 2017-10-08 14:09:35 +0000
Processing by DiscourseSitemap::SitemapController#index as XML
Rendering plugins/discourse-sitemap/app/views/discourse_sitemap/sitemap/index.erb
Rendered plugins/discourse-sitemap/app/views/discourse_sitemap/sitemap/_header.erb (0.8ms)
Rendered plugins/discourse-sitemap/app/views/discourse_sitemap/sitemap/index.erb (17.3ms)
Completed 500 Internal Server Error in 145ms (ActiveRecord: 104.4ms)
ActionView::Template::Error (comparison of Fixnum with nil failed)
/var/www/discourse/plugins/discourse-sitemap/app/views/discourse_sitemap/sitemap/index.erb:3:in >'
Thank you. Very weird though.
Can you please tell me what your value of the sitemap_topics_per_page site setting is?
As they were by default - 10000!
Should I change it?
I did change this number to 1000 but nothing changed!
This issue could be because of adding SSL?
Sometimes it opens, but after refreshing it gives HTTP ERROR 500!
My sitemap_topics_per_page value is 1000. After I change the number I can visit sitemap.xml, but after refreshing it gives HTTP ERROR 500!
@discoursehosting can't you see the error on your side? Try using the latest version of Discourse; a few of us are already having this issue.
Fixed it. In Rails 5 render :text is apparently deprecated.
The error only occurred when serving sitemap from cache, that is why it worked exactly once every time.
@discoursehosting Thank you very much, after updating works nicely ;)
@discoursehosting does it support https?
I have installed SSL on server, but in sitemap all links are through http!
Enable site setting force_https ...
@discoursehosting thank you very much!
Can you reopen this issue? It doesn't work on a non-SSL site. Someone replied it's due to a Rails 5 issue. This issue is not solved yet.
What does "doesn't work" mean? Any errors you're getting?
@discoursehosting 500 error. I think it's the same error as @DongHongfei's; it worked the first time, then gave a 500 error after I refreshed. Btw, I don't use SSL.
Are you sure you have updated the plugin? This looks like the former bug. There is no SSL dependency here.
update the plugin? I just installed it yesterday, currently the plugin is at version 1.1. Do I have to update after I install it?
That depends whether you installed it before or after the fixes. Please update it and retest.
I did rake plugin:update plugin=discourse-sitemap and ./launcher rebuild app, it's working now.
If you have been working on a server and installing applications, you may have encountered the "The specified directory service attribute or value does not exist" error. Apart from the above scenario, users have experienced this error in various situations, such as working with directory service objects from ASP.NET. It also appears when trying to create a point mirroring program, when using the Create Active Directory Object activity, and when using the Microsoft BitLocker Administration and Monitoring application. In this guide, we will cover some working troubleshooting methods that help fix this error.
Causes of The Specified Directory Service Attribute or Value Does Not Exist Error:
Now, in order to fix this issue, we must know some causes behind this error. While researching it, we found some common causes. If you are using Windows services like Microsoft BitLocker Administration and Monitoring and operating them with a local user account, you may get the error. Furthermore, missing permissions while working with an ASP.NET application and outdated applications are other common causes of this issue.
Similar Types of The Specified Directory Service Attribute or Value Does Not Exist Error:
- Windows cannot delete object.
How to Fix The Specified Directory Service Attribute or Value Does Not Exist
If you want to fix the Error following are some of the working resolutions that you must try.
1. Basic Troubleshooting Points –
We suggest you go through these important points before trying the methods below. In some cases, these common problems were the main cause of the issue.
- User has Permissions on the Domain: If you are submitting credentials to Active Directory via ASP.NET and getting the error, it is because you are not able to retrieve the LDAP (NativeObject) property for authentication, so make sure the user has permissions on the domain.
- Install Latest MBAM: If you are getting this error while using Microsoft BitLocker Administration and Monitoring (MBAM), kindly install the latest version.
- Add Temporary Username & Password: If you are getting this error while using directory services via ASP.NET, maybe the accounts being used to connect to AD are different. So In the DirectoryEntry constructor, add a temporary username and password.
2. Fixing Issues when using ASP.NET (When Installing Applications) –
This error is most often seen while using ASP.NET. If you have gone through the important points above and everything is good but you are still getting the error, follow this method.
- First thing you need to do is to crosscheck the IIS application pool account.
- Crosscheck whether read access to the file system is enabled for the IIS application pool account/ASP.NET.
- Make sure the directory where the apps are located has the same rights as the IIS application pool and the file system.
3. Fixing Issue when Passing Values & Creating AD Objects Activity –
If you are experiencing this issue when using Active Directory, follow these steps to resolve it. Users have reported that when creating an object in AD and passing values to it, they experience errors.
- STEP 1. To resolve this issue, we have to remove the unsupported objects from the ObjectData field. Follow the steps below.
- STEP 2. Open the command prompt with administrator privileges
**NOTE: With the following command we will first see all the properties.
Get-ADObject -Properties * -Filter "samaccountname -like 'CN'"
**NOTE: Here CN stands for the object's Common Name
- STEP 3. Now from ObjectDataField remove all the unsupported objects to fix the issue
4. Update the Application –
If you are still getting the error, maybe your application is outdated. Users have reported that when they updated their application, the error was eliminated by itself. So download the latest copy of the program or software that you are using to resolve the issue.
In this troubleshooting, we have seen four approaches to resolve The Specified Directory Service Attribute or Value Does Not Exist error. We have considered all the major scenarios where this error occurs. Furthermore, we have also given brief information regarding the causes of this issue.
We hope by following this article, your issue has been fixed. However, if you again face any error or problem in the future, you can share in the comments. Make sure to follow us. Thank you!
One of the things I occasionally enjoy doing in my spare time, odd though it may sound, is playing with math software to make it create cool visuals. I've had a couple of results I've been particularly happy with, so I thought I'd share them with the world.
The first is an animated graphic of a particular concept in vector calculus. The idea is that you have a curve in space, and at any given point you can define three orthogonal vectors with respect to the curve. The first is tangent to the curve (it points along the curve); the second is the normal vector which points in the direction of greatest curvature; and the third is the binormal which is perpendicular to the first two. In terms of physics concepts, if you think of the space curve as the path along which an object is travelling, the tangent vector is in the direction of its velocity (and tangential acceleration), the normal vector is in the direction of its centripetal acceleration, and the binormal vector is, I suppose, just a convenient normal vector to identify the plane in which the object is travelling at a given instant. In this picture, the green vector is the tangent, blue is the normal, and red is the binormal.
The second animation I have for you is an illustration of what's called a parametric surface. The equation for this surface is rather ugly and complex, but the surface itself is quite beautiful. I have it rotating to give you a complete visual of it. This is an example of math-as-art.
Both of these were done in Maple, which is my personal preferred software for math-art. However, you can do similarly cool things in not only other proprietary software like Mathematica, but also with free (in all senses of the term) software like Maxima, although I don't know the extent to which any free software does animations.
The thing I like most about Maple is that you can talk to it almost entirely in standard math notation, with a few (relatively intuitive) text commands for things like plotting and animating. What I like least about it is that it's a horrendous memory hog and a bit unstable on your standard PC. However, my laptop runs it quite nicely on 64-bit Linux, despite having been entirely incapable of running it under Windows, so it's possible that it may simply have Windows issues.
I rather dislike Mathematica's interface, but there are people who swear by it. As far as Maxima, if you're the sort of person who finds Matlab and command-line Linux easy to deal with, then Maxima is the package for you.
Regardless, however, I do recommend playing with some 3-d graphing-capable software if you're currently a math student (or if you last took math back when slide rules were in vogue); the coolness factor of today's software is really high in the graphics department, and these programs can do some really amazing symbolic math work too.
Our upcoming Loopring 3.6 release has been undergoing an internal audit this month. In the upcoming month, we will start audits with external partners. We further improved our circuits and contracts’ efficiency with a significantly reduced cost per transaction as a result. We will continue measuring different workloads to make sure everything works as efficiently as expected.
We are also finalizing a feature of Loopring 3.6 that we have not yet talked about: AMM. Loopring 3.6 will support both orderbook-based and AMM-based trading on layer-2 and allow settlement between regular orders and AMM pools directly. We will share more details about this much-requested feature in the coming weeks.
For more info on Loopring 3.6, check out this presentation (slides or video) from EDCON. [AMM is not discussed therein, as it is more recent.]
We edited the liquidity mining dashboard on Loopring.io to display realtime stats for our liquidity mining campaigns, now showing reward pool, duration, spread requirement, etc. Currently, liquidity mining campaigns are running for the USDT/DAI, USDC/USDT, GRG/ETH, and BZRX/ETH trading pairs. We will add another one this week.
The Loopring Exchange now gives maker orders a rebate on every order filled: 8% of the taker trading fee is given to makers. Also, the taker fee schedule has changed, and instead of ranging from 0.06% to 0.1%, taker fees now range a bit higher, from 0.06% to 0.2%. So, given the default taker fee of 0.2%, 1.6 bps (0.2% * 8%) is given to makers on their volume. This means makers now have negative fees (earning fees to trade). You can see maker rebates accrue daily on your account’s dashboard. [Note, there are no additional gas fees to trades, the fees are all-in.]
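The maker-rebate arithmetic above is easy to verify; a quick sketch, using the values stated in this post:

```python
taker_fee = 0.002      # default taker fee of 0.2%
rebate_share = 0.08    # 8% of the taker fee is passed on to the maker
maker_rebate_bps = round(taker_fee * rebate_share * 10_000, 2)  # in basis points
print(maker_rebate_bps)  # 1.6
```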
Loopring Exchange now supports ‘post-only’ orders. Meaning market makers can ensure their limit orders are never takers, only makers (will only earn rebates, never pay a fee).
We have also integrated MyEtherWallet support; now, MEW mobile wallet users can use their wallets on Loopring.io using MEWconnect. Thanks to the MEW team for creating that integration. Reminder, if you want to see your wallet (or any other integration/change to Loopring.io), it is open source, so feel free to check out the repo and make a PR.
We listed several new trading pairs, including YFI-USDT and CRV-USDT, ETH-USDC, and ETH-renBTC.
After receiving Solidified’s audit report for our smart wallet contracts (version 1.0), we conducted a few optimizations that we believe will significantly reduce wallet creation cost and transaction fees. The new meta-transaction module is also more intuitive and elegant from a design perspective. Solidified subsequently delivered a second audit report for Loopring Wallet version 1.1. The report is available at:
We have upgraded our mobile app to support the new wallet contracts. For those (very few) who have access to our alpha version, we urge you to upgrade at your earliest convenience. In September, we will invite more users to our beta testing program.
Our designer is working very hard on the new UX and UI. It will take three to four more weeks to finish the entire design. We look forward to delivering the polished app to our community as soon as possible.
Most of our relayer engineers participated in Loopring 3.6’s internal audit to ensure the new design and implementation are relayer-friendly. The team will start upgrading the relayer implementation to support 3.6 this week. We expect the work (without the AMM support) to be finished by the end of September.
We have replaced Kafka with our in-house message queue implementation that allows better parallelization of message processing. This change fixed a known issue related to failed order cancellation. We have also improved the prover running on Google Cloud Platform.
We released lots of new info on Loopring 3.6 when Daniel Wang, Loopring CEO, presented at EDCON. Can see a post and slides here, or watch the presentation:
[On that note, be sure to subscribe to our Youtube channel, where lots more of this type of educational content is hosted.]
Daniel also spoke on a panel at EDCON with Vitalik, Sergey Nazarov, and others.
Throughout the month, we sponsored our first ever podcast, Bankless. We offered the nation the code, BANKLESS, allowing VIP4 tier (lowest fees) for 6 months. [Still valid, so onboard with it at Loopring.io!]
We partnered with Coingecko to offer some of their loyal users (and ‘candy’ collectors) a VIP starter kit for Loopring.io. The 200 promo codes, which included an ETH gas subsidy for creating an L2 account, got gobbled up within the first day!
We wrote an in-depth liquidity mining guide for Loopring Exchange on Defiprime’s Alpha forum.
We enabled transfer support (Loopring Pay) for a dozen new tokens, including BAT, BAL, SNT, and more. If you’re sending lots of transfers in these crazy gas times, you should be using layer 2! Let us know if you’d like your token to be supported.
We broke down the math and gas cost considerations about onboarding onto Loopring’s zkRollup for trades and transfers vs staying on L1. TLDR: even with an account creation + deposit fee, you breakeven gas-wise with 1–2 DEX trades or 6–18 transfers.
Hummingbot opened up voting for which exchange connector should be added to their code base. Loopring.io is 1 of 3 contenders. Being included means accessing hundreds of market makers and programmatic traders, exactly what a high-performance orderbook DEX like Loopring needs, so please vote in their Discord before September 6!
In The News and On The Ground
A few more zkRollup evangelism efforts (1, 2) from Vitalik, hoping to help alleviate the crazy (and scary) gas prices that popped up this month.
Matthew Finestone, Loopring’s Head of Business, spoke on DEXs and liquidity at the Global DeFi Summit.
Matthew also did a live video AMA with Defiprime.
Loopring Exchange’s liquidity mining featured in Delphi Digital’s report.
Loopring Protocol was analyzed by Formal Verification’s ‘In The Week’.
DeFi Dad made a great Loopring.io video tutorial for Bankless, ‘3 reasons you should be trading on Loopring DEX’.
Matthew sat on a panel at Chainlink’s SmartCon virtual conference, with Camila Russo and bZX and Balancer teams, on the topic of fundraising and token listings in DeFi.
Loopring is a protocol for scalable, secure exchanges & payments on Ethereum using zkRollup. You can sign up for our Monthly Update, learn more at Loopring.org, or check out an exchange/payment app at Loopring.io.
Twitter ⭑ Discord ⭑ Reddit ⭑ Telegram ⭑ GitHub ⭑ Docs ⭑ YouTube
Add log scale to y-axis
Description
Adds a log scale to the y-axis of charts by introducing a logScale prop (boolean).
This does not include a log scale for the x-axis.
Charts this has been added to:
Line
Area
Bar
Scatter
Bubble
Mixed Charts
Charts this has not been added to:
Histogram
Possible Additions
Error handling for the following situations:
Negative values (currently, chart will appear blank when log scale is turned on and negative values are present)
100% stacked charts (produces a strange-looking chart)
Example
<LineChart data={growth} x=month y=value title="Normal Scale"/>
<LineChart data={growth} x=month y=value logScale=true title="Log Scale"/>
Major Questions
Should logScale prop be specific to an axis (ylogScale)?
I lean towards yes if we plan to offer either of the following:
x-axis log scale (xLogScale)
Secondary y-axis, where either axis or both could have a log scale (y2logScale)
Should this prop be framed as an "axis type" rather than a boolean
E.g., yType=log, xType=log, y2Type=log
Checklist
[x] For UI or styling changes, I have added a screenshot or gif showing before & after
[x] I have added a changeset
After looking at the secondary y-axis PR again (#874), I think we will need to split this out by axis.
so the prop for now would be “yLogScale”, with “y2LogScale” to be added along with secondary axis
Definitely want to be able to log independently
You could probably call it just yLog and y2Log
thinking: should you be able to optionally control the base of the log here?
like rather than show 1, 10, 100, maybe i want to show 2,4,8,16 etc on the axes. Obviously 10 is the default. but in computing / scientific applications 2 or 8 or e bases are not uncommon
I think echarts has a logBase option.
@archiewood have added your suggestions: prop is now yLog and have added yLogBase which defaults to 10, but can be overridden.
Going to skip the x-axis log scale for now as it's slightly more involved. Will be easy enough to add in the future
This is perhaps a side note, but certain types of charts seem poorly suited to a log scale.
Notably:
anything showing a start from zero (barcharts), since log(0) is undefined
especially anything stacked (eg stacked bar, area), since it's hard to interpret what the values mean, and relative sizes are distorted between series.
Perhaps log scales only really make sense for Line and Scatter Charts. Should we protect people from accidental traps here, or let them do what they want (there might be some valid cases I can't think of)?
This explains my thoughts better: https://www.graphpad.com/support/faq/graph-tip-dont-use-a-log-scale-on-a-bar-graph/
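The log(0) point is easy to demonstrate; a small sketch of why zero and negative values need guarding (Python's math module used purely for illustration):

```python
import math

# log10 is undefined at zero and for negative values, which is why
# zero-based charts (e.g. bar charts) break under a log scale.
for value in (0, -5):
    try:
        math.log10(value)
        print(f"log10({value}) ok")
    except ValueError:
        print(f"log10({value}) is undefined")
```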
I agree in general. I think we should not make it available on stacked charts. I’ll add some error handling for that.
I can see a scenario where you’d use it on a single series bar chart - e.g., plotting Covid cases with a bar per day. I think same for a single series area chart.
Looking pretty good. All the min max and base stuff works as expected.
Couple of edge cases we should handle:
Negative values / zero values - these should throw
If you don't specify a y, should you be able to yLog? This error needs to be clearer
Is this as expected?
@archiewood thanks.
That makes sense. I think we can easily access the min values in the y columns, so hopefully it's not too difficult to add that.
That's as expected since the y value will be filled in automatically. In this case, there are multiple y columns getting auto-assigned to y, so it's reading that as a multi-series stacked chart and throwing the error.
I'm not sure what's going on here - I can't tell why only one of the series is being displayed
Ok going to call it here. Some edge cases still exist, but I think they are quite small %s.
PhantomJS timed out
Hello,
I am following a setup from this example: http://seesparkbox.com/foundry/grunt_automated_testing
and I've tried multiple versions of PhantomJS, but I inevitably get stuck with this timeout problem.
It is actually not clear to me where I'd set the URL for Phantom to call. Someone suggested QUnit on another post, but I am not using QUnit. Nor is it clear where I could set the proxy (even though it's calling localhost); it may be related to it trying to call a non-localhost address, since I didn't actually set one anywhere?
Here is the error I see:
$ grunt test -dd
[D] ["phantomjs","onResourceRequested" ..........
[D] ["phantomjs","fail.timeout"]
Warning: PhantomJS timed out, possibly due to an unfinished async spec. Use --force to continue.
adding my gruntfile.js
module.exports = function (grunt) {
    grunt.initConfig({
        pkg: grunt.file.readJSON("package.json"),
        watch: {
            grunt: {
                files: ["Gruntfile.js", "package.json"],
                tasks: "default"
            },
            javascript: {
                files: ["app/**/*.js", "app/**/**/*.js", "tests/spec/*Spec.js", "*.jshintrc"],
                tasks: "test"
            }
        },
        jasmine: {
            src: "tests/lib/jasmine-1.0.0/*.js",
            options: {
                specs: "tests/spec/*Spec.js",
                // host: "http://localhost:62310/tests/specrunner.html",
                template: require("grunt-template-jasmine-requirejs"),
                '--web-security': false,
                '--local-to-remote-url-access': true,
                '--ignore-ssl-errors': true
            }
        },
        jshint: {
            all: [
                "Gruntfile.js", "app/**/*.js", "app/**/**/*.js"
            ],
            options: {
                jshintrc: "options.jshintrc"
            }
        },
        mochacli: {
            options: {reporter: "nyan", ui: "tdd"},
            all: ["tests/spec/*Spec.js"]
        },
        qunit: {
            all: {
                options: {
                    urls: [
                        'http://localhost:62310/tests/specrunner.html'
                    ]
                }
            }
        }
    });

    grunt.loadNpmTasks('grunt-template-jasmine-requirejs');
    grunt.loadNpmTasks("grunt-contrib-watch");
    grunt.loadNpmTasks("grunt-contrib-jshint");
    grunt.loadNpmTasks("grunt-contrib-jasmine");
    grunt.loadNpmTasks("grunt-mocha-cli");

    grunt.registerTask("test", [
        "jshint",
        "jasmine"
        //, "mochacli"
    ]);

    grunt.registerTask("default", ["test"]);
};
Are you still encountering this issue ?
From what I can tell, you didn't setup a local web server, because you have urls in your gruntfile. Note that urls must be served by a web server, and since jasmine task doesn't contain a web server, one will need to be configured separately. The grunt-contrib-connect plugin provides a basic web server.
Going to close this issue, feel free to ask if you have followup questions.
Learn More About Web Coding
What exactly is web coding? It is the work done to create a website, from a plain text static page to a complex web application or electronic business. It also includes the creation of social networking services. In other words, web development is a broad field, involving a variety of different technologies. To learn more about web coding, keep reading. This article will give you a broad overview of web coding, as well as provide some tips and tricks for getting started.
HTML is a type of coding language that tells web browsers what different parts of a website should look like. You use HTML to create headers, links, and paragraphs. There are also HTML tags that help you set up images, which can help create an appealing and informative website. But what is the point of all this? Ultimately, web coding is all about making your website compatible with as many different types of users as possible.
In order to learn how to code a website, a web developer must be familiar with a variety of languages and types of HTML. Without this knowledge, it can be difficult to customize a website. To help yourself learn more about web coding, research website templates and look through their source codes. While web coding can be hard, there are many free resources available online to learn the fundamentals of the field. The first step is to find out what other people are doing with the same tools. If you are not comfortable with this level, you may want to hire a web developer.
HTML is used to create the skeleton of any web application. HTML is a special language that tells web browsers what each piece of content should look like. Using HTML allows you to define things like headings, links, paragraphs, and images, among others. It’s easy to become confused with the terminology of HTML, so make sure you read everything carefully before you start coding a website. The more you learn, the more efficient your website will be!
It’s a common misconception that web programming is hard. In reality, 99% of web applications are terrible. But that doesn’t mean you should stop learning! Learning to code web applications is a rewarding career path, and with the right training, you can reach audiences and become more confident with your skills. You can even earn a degree in web coding by recasting your software into web applications. These skills can help you create more interesting and useful products.
In the past, web developers would write code offline and send it online once the work was complete. This was a time when internet speed was slow. But the advent of online IDE editors helped to remedy this situation, prompting the movement from offline to online coding. It’s worth considering this change in mindset if you’re a web developer and are interested in building a successful business. When you’re ready to start coding, consider these tips to make your website a success.
from geneticknot.Board import Board
from geneticknot.DNN import DNN


def play_agents_tictactoe(players_couple, board_shape):
    """Play one game between two neural-network agents and record the result."""
    player_1, player_2 = players_couple
    board = Board(board_shape)
    status = -1  # -1 means the game is still in progress
    while status == -1:
        # Player 1 ranks all cells by network output and plays the best legal one.
        player1_moves = player_1.forward(board.board.flatten())
        player1_moves = player1_moves.argsort()[::-1]
        board.put_move(moves=player1_moves, player_num=1)
        status = board.getWinner()
        if status != -1:
            break
        # Player 2 sees the board from its own perspective.
        player2_moves = player_2.forward(board.get_board_for_player2().flatten())
        player2_moves = player2_moves.argsort()[::-1]
        board.put_move(moves=player2_moves, player_num=2)
        status = board.getWinner()
    # status: 1 = player 1 won, 2 = player 2 won, 0 = draw
    if status == 1:
        player_1.wins += 1
        player_2.loses += 1
    elif status == 2:
        player_2.wins += 1
        player_1.loses += 1
    elif status == 0:
        player_1.draw += 1
        player_2.draw += 1


def play_with_ai(board_shape):
    """Let a human play against a trained network from the console."""
    dnn = DNN(9, 9, [2, 2])
    dnn.load_network()
    board = Board(board_shape)
    while board.getWinner() == -1:
        ai_moves = dnn.forward(board.board.flatten())
        ai_moves = ai_moves.argsort()[::-1]
        board.put_move(moves=ai_moves, player_num=1)
        board.print_board()
        user_input = [int(input("Enter move: "))]
        board.put_move(user_input, player_num=2)
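For context, a hypothetical driver built on the same agent interface (objects carrying wins/loses/draw counters) could run a round-robin tournament. The play callable is injected so this sketch stays self-contained rather than assuming the geneticknot package:

```python
from itertools import combinations


def tournament(agents, play, board_shape=(3, 3)):
    """Round-robin every pair of agents, then rank them by net wins.

    `play` is any callable with the same signature as
    play_agents_tictactoe above (hypothetical injection point).
    """
    for pair in combinations(agents, 2):
        play(pair, board_shape)
    # Rank agents by wins minus losses, best first.
    return sorted(agents, key=lambda p: p.wins - p.loses, reverse=True)
```

This ranking could then feed a selection step in a genetic-algorithm loop, keeping the fittest agents for the next generation.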
Can't use SSH with Powershell
Environment
Platform ServicePack Version VersionString
-------- ----------- ------- -------------
Win32NT 10.0.18363.0 Microsoft Windows NT 10.0.18363.0
Windows Terminal version (if applicable): 0.9.433.0
Steps to reproduce
Open Windows Terminal Preview
Run ssh
Expected behavior
usage: ssh [-46AaCfGgKkMNnqsTtVvXxYy] [-B bind_interface]
[-b bind_address] [-c cipher_spec] [-D [bind_address:]port]
[-E log_file] [-e escape_char] [-F configfile] [-I pkcs11]
[-i identity_file] [-J [user@]host[:port]] [-L address]
[-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port]
[-Q query_option] [-R address] [-S ctl_path] [-W host:port]
[-w local_tun[:remote_tun]] destination [command]
Actual behavior
ssh : The term 'ssh' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the
spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ ssh
+ ~~~
+ CategoryInfo : ObjectNotFound: (ssh:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
Additional Information
Something interesting is that I can't even view the OpenSSH directory...
PS C:\Users\kuzi-moto> Get-Item "C:\Windows\System32\OpenSSH"
Get-Item : Cannot find path 'C:\Windows\System32\OpenSSH' because it does not exist.
At line:1 char:1
+ Get-Item "C:\Windows\System32\OpenSSH"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (C:\Windows\System32\OpenSSH:String) [Get-Item], ItemNotFoundException
+ FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.GetItemCommand
But from a normal PowerShell prompt, it works just fine:
PS C:\Users\kuzi-moto> Get-Item "C:\Windows\System32\OpenSSH"
Directory: C:\Windows\System32
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 3/19/2019 1:21 AM OpenSSH
windows settings > apps > apps & features > optional features > add a feature > openssh client
> windows settings > apps > apps & features > optional features > add a feature > openssh client
Already installed!
This sounds a lot like x86/x64/System32/SysWOW trickery. @kuzi-moto do you know what architecture you're running on your PC?
This sounds a lot like how when I run Terminal from VS after building it, it can't find WSL on the path (because it's not 32-bit), but running it normally, WSL works fine.
Are you running something like:
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
or
C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe
> This sounds a lot like x86/x64/System32/SysWOW trickery. @kuzi-moto do you know what architecture you're running on your PC?
PS C:\Users\kuzi-moto> $ENV:PROCESSOR_ARCHITECTURE
AMD64
> This sounds a lot like how when I run Terminal from VS after building it, it can't find WSL on the path (because it's not 32-bit), but running it normally, WSL works fine.
> Are you running something like:
> C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
> or
> C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe
Looks like my standard PowerShell prompt runs from %SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe. I opened the one from the SysWOW64 directory, and it has the same issue with SSH as well.
That's really unusual. From inside WT, can you also share $env:PATH?
> That's really unusual. From inside WT, can you also share $env:PATH?
PS C:\Users\kuzi-moto> $env:PATH
C:\Python38\Scripts\;C:\Python38\;C:\Python37\Scripts\;C:\Python37\;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\iCLS\;C:\Program Files\Intel\Intel(R) Management Engine Components\iCLS\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\ProgramData\chocolatey\bin;C:\Program Files\dotnet\;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files (x86)\GNU\GnuPG\pub;C:\Program Files\Symantec.cloud\PlatformAgent\;C:\tools\php73;C:\ProgramData\ComposerSetup\bin;C:\ProgramData\ngrok-stable-windows-amd64;C:\tools\php74;C:\Program Files\PuTTY\;C:\Program Files\Docker\Docker\resources\bin;C:\ProgramData\DockerDesktop\version-bin;C:\Program Files\Git\cmd;C:\Go\bin;C:\Program Files\nodejs\;C:\Program Files (x86)\Microsoft SQL Server\150\DTS\Binn\;C:\Users\kuzi-moto\AppData\Local\Microsoft\WindowsApps;C:\Users\kuzi-moto\AppData\Local\Programs\Microsoft VS Code\bin;C:\tools\mysql\current\bin;C:\Users\kuzi-moto\AppData\Roaming\Composer\vendor\bin;C:\Users\kuzi-moto\AppData\Local\Box\Box Edit\;C:\Users\kuzi-moto\go\bin;C:\Users\kuzi-moto\AppData\Roaming\npm
C:\Windows\system32\openssh is definitely in there.
If you run it from WT as c:\windows\sysnative\openssh\ssh.exe, does it work? (key being sysnative)
> C:\Windows\system32\openssh is definitely in there.
> If you run it from WT as c:\windows\sysnative\openssh\ssh.exe, does it work? (key being sysnative)
Yes, that does appear to work.
This is fascinating. Everything on your system is acting as though you're in WoW/using powershell x86 on an x64 machine/possibly on an ARM64 machine where it doesn't ship arm64 powershell by default.
Did you install Terminal from the store, or from our downloads page?
Can you share your profiles.json and the defaults.json from your terminal distribution?
Would you also mind sharing the output of Get-AppxPackage Microsoft.WindowsTerminal*?
Sorry for all the questions!
> Did you install Terminal from the store, or from our downloads page?
I initially installed it via chocolatey a while ago. On seeing these issues I uninstalled it, then re-installed via the store.
> Can you share your profiles.json and the defaults.json from your terminal distribution?
profiles.json:
So, I've been thinking about this for a while, and I just can't dig up a single idea as to why it's happening.
Would you mind running Process Monitor?
If you create a filter for Process name is powershell.exe Include (screenshot below), and then Event Class is File System Include, it'll be a pretty small trace.
filter
files only
ssh filter, possibly:
I'd love to see what powershell is doing when you try to run ssh!
When I run it, I get this:
it shows powershell searching for ssh in $PATH :smile:
I'm not sure what's wrong, but I get upwards of 2,000 events when running ssh with the suggested filters.
Here is a screenshot of the filters:
I have uploaded a spreadsheet of the results for your perusal: Logfile.xlsx
If you add this filter, it should cut down on the noise significantly:
You're right, now there are only 92 events that match.
Logfile.xlsx
Alright! This is crazy, but powershell is only searching the 32-bit system root.
C:\Windows\SysWOW64\ssh.*
C:\Windows\ssh.*
C:\Windows\SysWOW64\wbem\ssh.*
C:\Windows\SysWOW64\WindowsPowerShell\v1.0\ssh.*
C:\Windows\SysWOW64\OpenSSH
I've never heard of such a widespread failure of wow64... huh.
Hmm, so what seems to be the best way forward? It must be some sort of an issue though, since a standalone Powershell prompt seems to work fine, see attached trace: Logfile.xlsx
Alright, how about this set of filters:
when you launch a new powershell tab with WT, it should spit out some events.
I've got really great and terrible news for you. You have a stray copy of powershell.exe, 32-bit, sitting in C:\Windows. That's not really a normal or expected location for PowerShell to live. . .
Closing as a question -- root caused to powershell living somewhere it shouldn't, still on $PATH. Fix by identifying why that powershell's there (and removing it) or changing the commandline of Windows PowerShell so that it points to the one living in System32\WindowsPowerShell\v1.0 :smile:
Hey @kuzi-moto -- did you ever end up figuring out where that copy of PowerShell came from? :smile:
> Hey @kuzi-moto -- did you ever end up figuring out where that copy of PowerShell came from? 😄
Actually I never did! Couldn't figure out how it got there, so I think I just deleted it and all was well.
Excellent.
I thought of you because of #6684 -- I can't believe it took me literally six months to think of this solution ;P
Well, hindsight is always 20/20! I'm just glad I got it working, been happy using it this far.
Add C:\Program Files\Git\usr\bin to PATH system variable
Probably don't do that, just use the system's built-in SSH client once you figure out which architecture you're on.
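The root cause above (a 32-bit PowerShell on a 64-bit machine) can be diagnosed from any scripting language. As a rough sketch in Python — the PROCESSOR_ARCHITEW6432 behavior is documented Windows behavior, but treat the helper names as invented:

```python
import os
import struct


def pointer_bits():
    # Size of a pointer in bits: 32 in a 32-bit process, 64 in a 64-bit one.
    return struct.calcsize("P") * 8


def running_under_wow64():
    # On 64-bit Windows, a 32-bit (WoW64) process sees PROCESSOR_ARCHITEW6432
    # set to the real architecture; a native 64-bit process does not.
    # On non-Windows systems this simply returns False.
    return pointer_bits() == 32 and "PROCESSOR_ARCHITEW6432" in os.environ
```

A 32-bit process also gets `C:\Windows\System32` silently redirected to `SysWOW64`, which is exactly why `ssh` and the OpenSSH folder "disappeared" above; `sysnative` is the escape hatch.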
Some other people on Replit and I are reporting copied repls, and no one will do anything about it. Someone copied my code and is botting likes and runs on their repl, which is copied from mine, and it is still up on the trending page a week after I reported it. Before you close this topic, I want to know why you are so eager to take down people who aren’t doing anything wrong, yet when someone commits crimes against the coding community, no one will take action.
You can’t do anything about it, unless you have a license stating people can’t steal your code.
You have no right to ask for the repl to be taken down on Replit … I do believe you can ask for accreditation, but a lot depends on whether you put a license file in the repl
You can’t do anything about it, unless you have a license stating people can’t steal your code. (@Sky said), Also if you don’t have a license (for any legal reasons) then you can just buy Hacker/Pro plan
The license is a good idea. I’ve seen Repls that use them. For example, Bookie0’s
bounceCSS uses some creative commons licensing so that if you modify a fork you must give credit.
You can find a list of used/popular licenses in Licenses – Open Source Initiative
but public repls are MIT licensed so they should take copies down if they don’t accredit the author
MIT does not require accreditation, only that the MIT license text itself be retained
I currently have my project under an AGPLv3 Licence. This means that the person MUST give credit to me for the source code AND must comply with any other terms that I set for the source code being used by other parties.
If you have stated the licence then yes. But I have the feeling replit does not care and somewhere it might be written in the tos every thing is MIT. Maybe worth a check
There are two problems here, botting, and copying. Can we at least fix trending first? Then copied repls wouldn’t rise up and get so much (real and fake) attention.
Exactly! The only quality repls that get on trending are from big creators. Everything else is low-quality, copied work, that has botted likes & runs.
Honestly, the problem is with security; they need to add a signature to the repls (TikTok does the exact same thing).
I have, in fact, stated the license. It is in the files. Are you saying I need to put that in the description as well?
you need a license file, but we better check replit ToS as it might state all projects are MIT meaning that whatever the replit includes is irrelevant
I thought UMARismyname already said that they were:
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Please point out where MIT states attribution.
Ok, first of all, this was not the point of the thread. Second, if you read carefully above that, it says, “Content you create in a public Repl or in Teams for Friends is automatically subject to an MIT license.” This means that it automatically applied to all public repls. What this DOESN’T mean is that it is the ONLY license that can apply. For example, The user might be able to have their own license ie. an AGPLv3 Licence and have that for their repl.
The point of the thread is to ask for copies to be taken down. To do so, there must be legal ground and stating MIT is the legal ground is incorrect.
The ToS is unclear, and because of its ambiguity about whether MIT is the overriding license of a public repl or not, if Replit takes something down over license issues they could be sued.
The summary is: if you care about licensing, do not make the repl public; share it via GitHub with a LICENSE file in it instead.
Hi, I’m happy to offer some clarification here. As others have mentioned above, the MIT license is automatically applied to public Repls, and this is a very permissive open source license which allows others to fork and remix.
Q: How can I stop someone from forking my code?
A: The best solution is to make your Repl private. A private Repl can be licensed under any license that you wish, and the best way to indicate that is to include a license file with the code. If you don’t include a license file in your private Repl, then by default you are retaining all rights and copying is not allowed at all. If someone forks a private Repl and makes it public in violation of its license, you can report this and we will delete the public fork.
Q: What license should I choose in order to ensure that my code retains its attribution and isn’t relicensed under MIT?
A: There are a lot of options here. The two main options are proprietary or open source. A proprietary license generally prohibits copying at all. An open source license generally allows copying, but may include conditions such as retaining attribution and keeping forks under the same license. We can’t recommend a specific open source license because there are so many, but you can check out some options at Licenses – Open Source Initiative.
Q: What if private code, or code that is already licensed under a more restrictive license, (eg. GPL) is uploaded to Replit?
A: If you wrote that code, then you are its owner, so by uploading it to Replit you are dual-licensing it so that it is available under the MIT license also. If you did not write the code, then most likely you are not its owner, and you could be infringing copyright by uploading it under the MIT license. The copyright owner could then send Replit a DMCA notice, and we would take the code down.
Q: Is there any other way to get my code taken down?
A: If a public Repl doesn’t otherwise violate Replit’s Terms of Service, we won’t delete it or warn a user for forking it. However, if they remove your attribution from your original code and present it as their own, this amounts to plagiarism and we will unpublish it from Community at your request. The code will still be available on the user’s account, but it won’t be able to trend.
If you have any more questions, please follow up and I’ll be happy to try to answer. I hope this helps!
The question here is whether others find that sometimes they cannot conjure up the field they are after because there are so many closely named ones. Is it me, or is this a common finding?
Specifically, I am often after Plant association or Name of associated plant. Most times, the drop-down menu I see for fields suggests some of my most commonly used fields (great!). Occasionally this seems to break and I end up with a list of suggested fields that don’t give me what I am after, so my next move is to type in the desired field; however, because there are so many similarly named fields, what I want does not show up. I believe if I reload the page, sometimes my old list will come back.
This has been of particular frustration when trying to record a one take tutorial video of using the field and then it is not there and I can’t simply get it back and I have to start from scratch.
Is this just my experience?
Do you need to see examples?
Can one just put in the specific field “slug”? If not, is a feature request by me needed for this?
Examples would be helpful.
@tiwane Sorry about the delay in responding here. I found that I could not use my usual browser (Firefox) to take a screenshot of the Observation Fields drop down box - it kept disappearing. I finally found the time to reliably repeat my experience and take a screenshot using Chrome.
Usually I get the below drop down which includes my desired field, Plant association
On occasion I will get something like the following that does not have my desired field
When I type in the field I want, I get the following responses
My feeling was that if I could put in the field slug, 10524, from https://www.inaturalist.org/observation_fields/10524, the slug would allow me a more accurate option.
Ahem, shuffles feet,…hmmm?..no input?..
Sorry about that.
Just to clarify, when you click the Observation Fields field, it shows you the 10 fields associated with the most recent Observation Fields you added. If, for example, that Name of Associated Plant field was the most recent one you added, and you only added one Name of Associated Plant field, and you deleted that observation field, it would disappear from the list since it’s no longer associated with an existing obs field
I’m not able to replicate this: I’m able to see Plant association as an option every time I type it in, either at the top or second.
What’s the URL of a page where you see this happening?
This is the particular URL I was working with. https://www.inaturalist.org/observations/177590859
I was trying to do a tutorial video for a group I’m with, which had me doing a few retakes because I was fumbling on what I was saying. So that had me deleting the field and restarting. What you are saying makes sense. Maybe the system was trying hard not to have my choice come up again by offering other choices instead.
This can be a real problem. I joined multiple projects related to pollinator/plant associations. Turned out two or more had associated fields named “Nectar Plant” and others had very similar names. These were not the same field, as far as I can tell – two or more “Nectar Plant” fields would come up in my list. Too confusing. I reduced the number of projects to one (Pollinator Associations) and by now I have only one field name coming up on the list of 10 most frequently used. Is it the field actually associated with the project Pollinator Associations? I have no idea.
Aah. So it is not just me (but I don’t know if just two is enough). This is where I think having the additional ability to put in the slug would help. I know that one can look up the projects that are using certain fields under said fields ie https://www.inaturalist.org/observation_fields/498 but this is cumbersome to do on the go.
Are all fields part of projects? I’ve been using the fields that seem to add value, but was not aware that I might thereby be impinging on a project.
I would say no.
There are some fields that are not part of a project. Like this one: https://www.inaturalist.org/observation_fields/4953
or this https://www.inaturalist.org/observation_fields/11442
And then there are some fields that are used in many projects such as https://www.inaturalist.org/observation_fields/1685 which is used in 9 projects.
One can explore the field from an observation by right clicking on the field and picking the bottom choice in the drop down box, Observation field details
If one goes to the bottom of that page, it will show a list of projects using the field
Linux - Newbie
This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's, this is the place!
I have been spoilt by Windows, so I thought I would stand up for myself and brave the world of Linux (Mandrake 9.1). I installed it at the weekend and my initial impressions were very good indeed.
Of course I am beginning to miss, or can't damn well find, a lot of the goodies that Windows has spoilt me with.
I have happily found an alternative to MSN Messenger in GAIM,
I have also got my USB printer working well.
But what about these queries ......
Is there an alternative to KAZAA for downloading music files (for example)?
Is there an alternative to Windows Media Player?
How do I download photos from my digital camera (USB again)?
Can I use my CD Writer to burn music CDs etc.?
Finally (for now), can I view the files in my Windows folder, which is NTFS?
1) There are hundreds of alternatives to Kazaa; browse through this link.
2) There are also tonnes of media players, but I will stick with the most popular and suggest: MPlayer
3) You will have to mount the camera as a usb-storage device and go to the mount point to retrieve the photos.
4) Also again tonnes of software, but one of the most popular apps is k3b.
You need to ensure that your kernel has NTFS support enabled. I think that some distros have it disabled by default. Mandy may be one of them.
There are bound to be countless discussions on this board about checking your kernel options and how to add features that you want but don't have. Use the search button. Also get to know your best buddies...
I too am a n00b (RH9 was my poison of choice)... so maybe I'm not one to offer too much insight, but I can share some of what I have learned here...
As for P2P file sharing [i.e. Kazaa], there are options. For example, you can run a Windows emulator (such as Wine), which should run something as simple as Kazaa without problems, but I'm not positive, since I gave up Kazaa a while ago for IRC. Which brings me to option number 2... IRC... it's a little tricky to use sometimes, and takes at least one more brain cell to function through, but you get better results and far fewer virus/bogus files and porn (unless you like the porn, virus issues, and waiting two hours to download something only to find it's not what it was named as). Now I'm still in the search phase for an IRC client that will allow effective file transfer in Linux... and I must admit, to no avail... there are also FTP options for P2P sharing... after all, Linux shines as a server....
As for the alternative to Windows Media Player, there are too many options here, actually... such as Xine... and I installed VLC (VideoLAN Client), which was very easy to set up and so far plays all my old movies (from avi and mpg/mpeg to dat). The only thing it won't play right out of the box is WMV files, which I'm sure you can get some software to convert (I'm working on that now as well). There is a piece called MPlayer that is touted here... do a search on MPlayer and you will find the links and setup info.
Using peripherals such as cameras is a matter of having the correct software, and yes, there is a volume of options there as well... my recommendation is to go to www.google.com/linux and do a search... you will find one that works, I'm sure of it... you might even find one that you actually like!
Same goes for the CD-RW... it's a matter of getting the software, just like Windows needs external software to drive most of the features (although there are some built-in options in XP, but nothing as good as you could buy).
You have to work a little harder for the software, but since most of it is still free (or damn near), it makes it all worthwhile... the only thing here is no version of Linux assumes you're an idiot like MS does... and they don't keep you dependent on their product like MS does... so you have to do the legwork and find what suits your needs.
So far I haven't found ANYthing my XP partition will do that my RH9 partition won't do... and the only $ I've spent was on a book to get me started.
Start off by entering your camera, CD-R etc. in google.com/linux and/or here in the search area... and you can try http://freshmeat.net for some options, as well as Tucows... there are a lot of places to find what you're looking for, you just gotta get out there and look.
It is the MOTHER SUPERIOR of P2P clients. It works on the Direct Connect protocol, and you can find _anything_ you want. And I mean _ANYTHING_ bro. It truly rocks harder than any P2P client, including ones (especially ones!) for Windows, by far.
And gphoto2 is the thing to search for your camera. Their site will tell you if it's supported in Linux or not.
Please don't start the lassie thing again man, there are chicks here (some prefer to be called women!). Use the search button dude.
Now I'm going to explain the NTFS thing vaguely and in layman's terms...
You have a kernel. It is what knows how to operate your computer's hardware and software. It has options that can be turned on and off. NTFS support is one of those options. You can find out how to turn it on by searching this board, and by searching on Google.
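One concrete way to check whether the running kernel has a given filesystem compiled in is to look at /proc/filesystems, which lists every supported filesystem one per line. A small hypothetical helper (the function name is invented):

```python
def kernel_supports(fs_name, proc_path="/proc/filesystems"):
    # On Linux, /proc/filesystems lists supported filesystems, e.g.
    #   nodev   proc
    #           ext4
    # "nodev" marks virtual filesystems; the last word is the fs name.
    try:
        with open(proc_path) as fh:
            return any(fs_name in line.split() for line in fh)
    except OSError:
        # Not on Linux, or /proc is not mounted.
        return False
```

If this returns False for "ntfs", the support was likely left out at kernel configuration time, as described above.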
Tests failing on clean checkout of master
I just forked Nock and tried running the tests and I have 3 tests failing before making any changes. Since the Travis build is passing, it probably means that one of our dependencies broke us recently. Typically you would solve this by either pinning the older version or fixing the break and pinning the new version. I haven't had enough time yet to figure out how exactly it broke, though.
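Pinning, in npm terms, means listing exact versions with no ^ or ~ range prefixes, so a fresh install resolves the same dependency tree as the last passing build. A hypothetical package.json fragment (the version numbers here are made up for illustration):

```json
{
  "devDependencies": {
    "superagent": "1.2.0",
    "restler": "3.2.2",
    "tap": "0.7.1"
  }
}
```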
Here's my test log for the 3 failing tests. They are all consecutive.
# records and replays gzipped nocks correctly
ok 402 should be equal
ok 403 (unnamed assert)
ok 404 (unnamed assert)
ok 405 (unnamed assert)
ok 406 should be equal
not ok 407 should be equal
---
file: /Users/ken/ksheedlo/nock/node_modules/superagent/lib/node/index.js
line: 628
column: 30
stack:
- |
getCaller (/Users/ken/ksheedlo/nock/node_modules/tap/lib/tap-assert.js:439:17)
- |
assert (/Users/ken/ksheedlo/nock/node_modules/tap/lib/tap-assert.js:21:16)
- |
Function.equal (/Users/ken/ksheedlo/nock/node_modules/tap/lib/tap-assert.js:163:10)
- |
Test._testAssert [as equal] (/Users/ken/ksheedlo/nock/node_modules/tap/lib/tap-test.js:87:16)
- |
/Users/ken/ksheedlo/nock/tests/test_recorder.js:341:7
- |
Request.callback (/Users/ken/ksheedlo/nock/node_modules/superagent/lib/node/index.js:628:30)
- |
Request.<anonymous> (/Users/ken/ksheedlo/nock/node_modules/superagent/lib/node/index.js:131:10)
- |
Request.emit (events.js:107:17)
- |
Stream.<anonymous> (/Users/ken/ksheedlo/nock/node_modules/superagent/lib/node/index.js:773:12)
- |
Stream.emit (events.js:129:20)
found: 3
wanted: 2
...
ok 408 should be equal
ok 409 should be equal
ok 410 should be equal
# records and replays gzipped nocks correctly when gzip is returned as a string
ok 411 should be equal
ok 412 (unnamed assert)
ok 413 (unnamed assert)
ok 414 should be equal
not ok 415 should be equal
---
file: /Users/ken/ksheedlo/nock/tests/test_recorder.js
line: 397
column: 7
stack:
- |
getCaller (/Users/ken/ksheedlo/nock/node_modules/tap/lib/tap-assert.js:439:17)
- |
assert (/Users/ken/ksheedlo/nock/node_modules/tap/lib/tap-assert.js:21:16)
- |
Function.equal (/Users/ken/ksheedlo/nock/node_modules/tap/lib/tap-assert.js:163:10)
- |
Test._testAssert [as equal] (/Users/ken/ksheedlo/nock/node_modules/tap/lib/tap-test.js:87:16)
- |
Request.<anonymous> (/Users/ken/ksheedlo/nock/tests/test_recorder.js:397:7)
- |
Request.emit (events.js:110:17)
- |
Request.mixin._fireSuccess (/Users/ken/ksheedlo/nock/node_modules/restler/lib/restler.js:222:12)
- |
/Users/ken/ksheedlo/nock/node_modules/restler/lib/restler.js:158:20
- |
IncomingMessage.parsers.auto (/Users/ken/ksheedlo/nock/node_modules/restler/lib/restler.js:394:7)
- |
Request.mixin._encode (/Users/ken/ksheedlo/nock/node_modules/restler/lib/restler.js:195:29)
found: 3
wanted: 2
...
ok 416 should be equal
ok 417 should be equal
ok 418 should be equal
# records and replays nocks correctly
ok 419 should be equal
ok 420 (unnamed assert)
ok 421 (unnamed assert)
ok 422 (unnamed assert)
not ok 423 should be equal
---
file: /Users/ken/ksheedlo/nock/tests/test_recorder.js
line: 458
column: 7
stack:
- |
getCaller (/Users/ken/ksheedlo/nock/node_modules/tap/lib/tap-assert.js:439:17)
- |
assert (/Users/ken/ksheedlo/nock/node_modules/tap/lib/tap-assert.js:21:16)
- |
Function.equal (/Users/ken/ksheedlo/nock/node_modules/tap/lib/tap-assert.js:163:10)
- |
Test._testAssert [as equal] (/Users/ken/ksheedlo/nock/node_modules/tap/lib/tap-test.js:87:16)
- |
Request._callback (/Users/ken/ksheedlo/nock/tests/test_recorder.js:458:7)
- |
Request.self.callback (/Users/ken/ksheedlo/nock/node_modules/request/request.js:373:22)
- |
Request.emit (events.js:110:17)
- |
Request.<anonymous> (/Users/ken/ksheedlo/nock/node_modules/request/request.js:1318:14)
- |
Request.emit (events.js:129:20)
- |
IncomingMessage.<anonymous> (/Users/ken/ksheedlo/nock/node_modules/request/request.js:1266:12)
found: 3
wanted: 2
...
ok 424 should be equal
ok 425 should be equal
Update: I pinned all the dependencies to the same versions as the latest passing build and my build is still failing. ¯\_(ツ)_/¯
It’s been a while since I’ve posted. I’ve been working on my side project Agulus, my upcoming book Developer Career Book, my podcast Complete Developer Podcast, helping run Code Newbie Nashville, as well as a full time job and having a family. I’ve been doing a lot of writing of late, but not very much online.
As I’ve been trying to juggle all these things, technology has simply not been helping any more. In the past couple months, I’ve had the following experiences:
There are a lot of upsides to WordPress. It’s not terribly difficult to set up, relatively straightforward to administer, and makes a lot of website use cases pretty simple. This is why I initially used the tool, as I just wanted to get started without too much ceremony. However, after a few years of using it, I started having a lot of issues. For one, WordPress has some significant security issues, as well as a large number of automated attacks on the platform. Nearly every day, I got at least half a dozen messages where particular IP addresses were being blocked for repeated unsuccessful login attempts. In addition, I had to spend a fair bit of time trying to tune my website to sort out performance issues, a number of which were actually the result of interaction with the MySQL database server. Finally, I had frequently wanted to experiment with different ways to lay out the website, but it wasn’t as straightforward as I would have liked due to the complexity of the tool.
I’ve been away from the blog for a bit. Life got really busy back in July and I finally realized I was a bit over-committed and had to scale back for a while. During the downtime, I’ve been re-evaluating my business and trying to determine what things I want to continue doing. Blogging is definitely one of the things I plan to continue, but I needed to get some other things handled. I’ve been thinking a lot about the direction in which I want to take my career as well, and I realized I was doing a lot of stuff that really isn’t what I want to do. However, that thought process also revealed a number of things that I DO want to do that will help my career go in the correct direction.
After the shooting that happened last week and the subsequent revelation that online forums may have contributed to that bowl-cut-weirdo’s radicalization and subsequent rampage, several people I know have approached me to ask what website owners with discussion boards could have done to prevent radicalization. There are some options, but I’ll warn you, most of them are pretty terrible and have a tendency to be easily circumvented (or to backfire spectacularly). The architecture of the internet is intended to route around damage (it was designed to assist with communications after one or more nuclear strikes, after all), and censorship mimics damage in an architectural sense. So, most fixes are not particularly useful, although a few are worth considering.
This past weekend, I spent a great deal of time reworking some parts of our data access layer for Agulus that have been problematic in the past, mostly by trying to get rid of places where we are using ExecuteScalar and places where we are working with DataTables, as both are very sensitive to changes in the underlying database schema and the errors don’t surface until runtime, which really stinks. I managed to figure out how to get all the metadata I needed to replace this functionality (some things could be easier….) and proceeded to start editing my templates to achieve this goal. As I did so, I started reflecting on how much I’ve learned, mostly the hard way, about how to manage larger sets of T4 templates while keeping things maintainable. I’ve not seen a lot of guidance floating around on how to deal well with this stuff, so here’s a list of a few things I’ve figured out (so far). Most of the guidance below sounds very much like the guidance you’d expect to see when building something using ASP.NET MVC, which, if you think about it, makes a lot of sense, since both share many similar concerns. None of these are earth-shattering, but people tend to forget that code that generates code should be maintained at the same quality level as code that is actually being shipped.
|
OPCFW_CODE
|
using System;
using System.Collections.Generic;
namespace Geisha.Engine.Core.SceneModel
{
/// <summary>
/// Scene is a collection of entities that build a single game environment, e.g. a single level. Scene represents a
/// particular game state from the engine perspective.
/// </summary>
public sealed class Scene
{
private readonly IComponentFactoryProvider _componentFactoryProvider;
private readonly List<Entity> _entities = new(); // TODO Would HashSet be faster?
private readonly List<Entity> _rootEntities = new(); // TODO Would HashSet be faster?
private readonly List<Entity> _entitiesToRemoveAfterFixedTimeStep = new();
private readonly List<Entity> _entitiesToRemoveAfterFullFrame = new();
private readonly List<ISceneObserver> _observers = new();
/// <summary>
/// Creates new instance of <see cref="Scene" /> class.
/// </summary>
internal Scene(IComponentFactoryProvider componentFactoryProvider)
{
_componentFactoryProvider = componentFactoryProvider;
SceneBehavior = SceneBehavior.CreateEmpty(this);
}
/// <summary>
/// All entities in the scene, that is, all root entities and all their children. It can be used to find a particular
/// entity even if it is only a part of a certain complex object.
/// </summary>
public IReadOnlyList<Entity> AllEntities => _entities.AsReadOnly();
/// <summary>
/// Root entities of the scene. These typically represent whole logical objects in game world e.g. players, enemies,
/// obstacles, projectiles, etc.
/// </summary>
public IReadOnlyList<Entity> RootEntities => _rootEntities.AsReadOnly();
/// <summary>
/// Gets or sets <see cref="SceneModel.SceneBehavior" /> used by this <see cref="Scene" />. Default value is empty
/// behavior <see cref="SceneModel.SceneBehavior.CreateEmpty" />.
/// </summary>
/// <remarks>
/// Set <see cref="SceneBehavior" /> to instance of custom <see cref="SceneModel.SceneBehavior" /> implementation
/// in order to customize behavior of this <see cref="Scene" /> instance.
/// </remarks>
public SceneBehavior SceneBehavior { get; set; }
/// <summary>
/// Creates new root entity in the scene.
/// </summary>
/// <returns>New entity created.</returns>
public Entity CreateEntity()
{
var entity = new Entity(this, _componentFactoryProvider);
_entities.Add(entity);
_rootEntities.Add(entity);
NotifyEntityCreated(entity);
return entity;
}
/// <summary>
/// Removes the specified entity from the scene. If the entity is a root entity, it is removed together with all its
/// children. If it is not a root entity, it is removed (together with all its children) from the children of its parent entity.
/// </summary>
/// <param name="entity">Entity to be removed from the scene.</param>
/// <remarks>
/// <see cref="Entity" /> removed from the <see cref="SceneModel.Scene" /> should no longer be used. All
/// references to such entity should be freed to allow garbage collecting the entity. Entity removed from the scene
/// may throw exceptions on usage.
/// </remarks>
public void RemoveEntity(Entity entity)
{
if (entity.Scene != this)
{
throw new ArgumentException("Cannot remove entity created by another scene.");
}
if (entity.IsRemoved) return;
while (entity.Children.Count != 0)
{
RemoveEntity(entity.Children[0]);
}
while (entity.Components.Count != 0)
{
entity.RemoveComponent(entity.Components[0]);
}
entity.Parent = null;
_entities.Remove(entity);
_rootEntities.Remove(entity);
entity.IsRemoved = true;
NotifyEntityRemoved(entity);
}
#region Internal API for Entity class
/// <summary>
/// Internal API for <see cref="Entity" /> class.
/// </summary>
internal void OnEntityParentChanged(Entity entity, Entity? oldParent, Entity? newParent)
{
if (newParent is null)
{
_rootEntities.Add(entity);
}
if (newParent != null && oldParent == null)
{
_rootEntities.Remove(entity);
}
NotifyEntityParentChanged(entity, oldParent, newParent);
}
/// <summary>
/// Internal API for <see cref="Entity" /> class.
/// </summary>
internal void OnComponentCreated(Component component)
{
NotifyComponentCreated(component);
}
/// <summary>
/// Internal API for <see cref="Entity" /> class.
/// </summary>
internal void OnComponentRemoved(Component component)
{
NotifyComponentRemoved(component);
}
/// <summary>
/// Internal API for <see cref="Entity" /> class.
/// </summary>
internal void MarkEntityToBeRemovedAfterFixedTimeStep(Entity entity)
{
_entitiesToRemoveAfterFixedTimeStep.Add(entity);
}
/// <summary>
/// Internal API for <see cref="Entity" /> class.
/// </summary>
internal void MarkEntityToBeRemovedAfterFullFrame(Entity entity)
{
_entitiesToRemoveAfterFullFrame.Add(entity);
}
#endregion
#region Internal API for SceneManager class
/// <summary>
/// Internal API for <see cref="SceneManager" /> class.
/// </summary>
internal void AddObserver(ISceneObserver observer)
{
if (_observers.Contains(observer))
{
throw new ArgumentException("Observer is already added to this scene.");
}
_observers.Add(observer);
foreach (var rootEntity in _rootEntities)
{
NotifyObserverAboutExistingEntityTree(observer, rootEntity);
}
}
/// <summary>
/// Internal API for <see cref="SceneManager" /> class.
/// </summary>
internal void RemoveObserver(ISceneObserver observer)
{
if (!_observers.Remove(observer))
{
throw new ArgumentException("Observer to remove was not found.");
}
foreach (var rootEntity in _rootEntities)
{
NotifyObserverToRemoveEntityTree(observer, rootEntity);
}
}
/// <summary>
/// Internal API for <see cref="SceneManager" /> class.
/// </summary>
internal void OnLoaded()
{
SceneBehavior.OnLoaded();
}
#endregion
#region Internal API for GameLoop class
/// <summary>
/// Internal API for <see cref="GameLoop.GameLoop" /> class.
/// </summary>
internal void RemoveEntitiesAfterFixedTimeStep()
{
foreach (var entity in _entitiesToRemoveAfterFixedTimeStep)
{
RemoveEntity(entity);
}
_entitiesToRemoveAfterFixedTimeStep.Clear();
}
/// <summary>
/// Internal API for <see cref="GameLoop.GameLoop" /> class.
/// </summary>
internal void RemoveEntitiesAfterFullFrame()
{
foreach (var entity in _entitiesToRemoveAfterFullFrame)
{
RemoveEntity(entity);
}
_entitiesToRemoveAfterFullFrame.Clear();
}
#endregion
#region Observers notifications
private void NotifyEntityCreated(Entity entity)
{
foreach (var observer in _observers)
{
observer.OnEntityCreated(entity);
}
}
private void NotifyEntityRemoved(Entity entity)
{
foreach (var observer in _observers)
{
observer.OnEntityRemoved(entity);
}
}
private void NotifyEntityParentChanged(Entity entity, Entity? oldParent, Entity? newParent)
{
foreach (var observer in _observers)
{
observer.OnEntityParentChanged(entity, oldParent, newParent);
}
}
private void NotifyComponentCreated(Component component)
{
foreach (var observer in _observers)
{
observer.OnComponentCreated(component);
}
}
private void NotifyComponentRemoved(Component component)
{
foreach (var observer in _observers)
{
observer.OnComponentRemoved(component);
}
}
private static void NotifyObserverAboutExistingEntityTree(ISceneObserver observer, Entity entity)
{
observer.OnEntityCreated(entity);
if (!entity.IsRoot)
{
observer.OnEntityParentChanged(entity, null, entity.Parent);
}
foreach (var component in entity.Components)
{
observer.OnComponentCreated(component);
}
foreach (var child in entity.Children)
{
NotifyObserverAboutExistingEntityTree(observer, child);
}
}
private static void NotifyObserverToRemoveEntityTree(ISceneObserver observer, Entity entity)
{
foreach (var child in entity.Children)
{
NotifyObserverToRemoveEntityTree(observer, child);
}
foreach (var component in entity.Components)
{
observer.OnComponentRemoved(component);
}
if (!entity.IsRoot)
{
observer.OnEntityParentChanged(entity, entity.Parent, null);
}
observer.OnEntityRemoved(entity);
}
#endregion
}
}
|
STACK_EDU
|
Why are many lefties so thick? Three recent events pose this question: the mob that harassed Douglas Carswell; Len McCluskey's threat to sue Nick Cohen for libel; and the vandalism of a war memorial by anti-Tory protestors.
There's a common theme here. These episodes reinforce the worst image of the left - that it (we) are self-righteous bullies who care nothing for the liberties or sensibilities of others. I fear, therefore, that they actually detract from the left's cause.
For this reason, they reinforce Nick's criticism - that the left has lost "any notion of how to change a society."
Of course, social change is vastly complex and poorly understood. But I'd suggest that narcissistic posturing is perhaps not the best way to achieve it. Feudalism did not give way to capitalism because villeins protested their moral superiority to their lords, so perhaps capitalism won't convert to socialism this way either.
So, what can lefties do instead?
We could start by heeding Rebecca Winson's advice, and find a way to preach to the unconverted. For some of us, this means blogging about how inequality imposes social and economic costs; how austerity is based upon economic illiteracy; how Tory policies have real human costs; or how tolerance of injustice is based in part upon cognitive illusion. Sure, our audience is small. But if millions of people have a few, rational discussions with millions of others, it adds up.
In this context, language matters. I'm not sure that "fuck Tory scum" is a way for the left to win friends. Nor do I like the phrase "ordinary working people": who wants to think themselves ordinary?
Another thing is to find stepping stone changes: apparently small changes that can lead to others. My call for Brailsfordism might fit this bill: in inviting workers to suggest improvements, it is intended to build class consciousness - to embolden workers to recognize that they, and not bosses, have the potential to organize institutions themselves.
Here, there might be a case for those much-derided "safe spaces": fora in which marginalized groups can speak without being dominated by white men might give some people the confidence to become more politically active*.
Yet another thing we can do is to encourage socio-technical change. There's widespread agreement that Marx was right that "the mode of production of material life conditions the general process of social, political and intellectual life" - that "the hand-mill gives you society with the feudal lord; the steam-mill society with the industrial capitalist." The work of Jeremy Greenwood and Ian Morris vindicates this view.
Perhaps another technology-induced social change is occurring. For example, the collapse in the cost of storing and transmitting information makes decentralization - worker control - feasible where previously there was hierarchy. And lower capital requirements might be undermining the monopoly power of big capitalism in favour of smaller companies. We can encourage this, for example, by spending our money at independent coffee shops, craft breweries, worker coops or through P2P lending rather than at capitalist firms.
What I'm saying here is that the transition to a better society might occur not (just) by protesting or waiting for a big bang revolution, but as a result of countless small individual actions, which might have echoes throughout society. Obviously, I don't know what all these actions should be - but if the left can apply millions of brains to the question, it might find some answers.
* Jason Brennan counters that safe spaces can be infantilizing. It's possible that both views are right, depending upon the precise institutional context from place to place.
|
OPCFW_CODE
|
POST sent via AJAX comes out empty
So I have the following jQuery block;
var file = "something.html";
$.post("/load", {which: file}, function(data){
// do stuff
});
Which is supposed to call a backend that loads a file. The load controller (I'm on CodeIgniter 2.1.3) looks like so;
class Load extends MY_Controller {
public function index()
{
$file = $this->input->post("which");
$content = file_get_contents($file);
echo $content;
}
}
However, for some reason, the $file variable remains empty.
Now, I've tested that the load controller does actually work, by changing the $this->input->post() into a $this->input->get() and visiting the page from the browser (e.g. /load?which=something.html); the correct file is loaded and echoed.
I've debugged the code by using the plain $_POST variable instead of the CodeIgniter handler. $_POST["which"] comes up as an undefined index error.
As per suggestions from other threads, I've also tried var_dump() on $_POST and file_get_contents('php://input'), and they both come up as empty. For all intents and purposes the backend is just not receiving any POST data.
From the Network section of my Inspector, I can see that the AJAX call is indeed sent, and the "Form Data" section contains which: something.html as it should. The request is returned with a 404 Not Found status code. However, I do know that the proper controller is indeed contacted as adding echo "test"; exit; into the controller before it all does indeed return "test" for the AJAX request.
As far as the solution candidates I've found online go, my php.ini should be fine; I've made sure that variables_order contains P and post_max_size is a reasonable size. And as some answerers suggested that it might be an .htaccess redirect thing, I did try adding a trailing slash to the URL that the JS requests, but all to no avail.
Here's my .htaccess for reference;
# Customized error messages.
ErrorDocument 404 /index.php
# Set the default handler.
DirectoryIndex index.php
# Various rewrite rules.
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1 [L,QSA]
</IfModule>
This same code has worked in the past, but now I'm on a new computer with a fresh install of Apache (2.2.22) and PHP (5.3.13), which leads me to believe that this is most likely some sort of config issue with either Apache or PHP. I don't know if that makes this thread more suitable for Server Fault than Stack Overflow, but as I am a programmer and not a system admin, I thought to see if anyone here has any thoughts. Thanks!
Have you tried wrapping which around quotes - {"which": file}?
what does it shows when you try yoursite/load ?
are u missing site_url() there.... $.post(<?php echo site_url('load') ?>.......
@asprin that change will be consumed by the javascript interpreter and will not be visible inside $.post
What was the PHP version where it worked?
@JanDvorak Didn't knew that. I always quote the keys of the data being passed :)
I don't see the Options line. Try adding this Options +FollowSymlinks -MultiViews on top of the rewrite rule-set.
@asprin it's required in JSON, but not by Javascript
@faa That only seems to produce a 500 Internal Server Error
Where did you do the var_dump($_POST)? Try doing that in index.php - very first line, then exit. Helps narrow down where the breakdown in communications is
@JanDvorak Just dug up my old laptop; the previous localhost was running Apache 2.2.21 and PHP 5.3.10. Because of the version change I didn't move over the php.ini and Apache conf files directly, but tried to replicate every config edit from the previous setup on the fresh files.
@Robbie I did it on the very first line of the index function in the load controller.
@EmphramStavanger try diffing the two .ini files
Check post_max_size is not 0 (or invalid - i.e. check it's 8M and not 8MB). That will give you what you experience.
@JanDvorak Yup, that was it. I made sure I had all the same Apache modules and PHP extensions on that I had on the old machine and now it works again. Duh. Thanks everybody! :)
@EmphramStavanger you should compile your findings and solution into a self-answer and post it for the benefits of others.
Based on Jan Dvorak's sharp suggestion, I compared the configuration of the current environment with that of the one where the code last worked, and I made sure the lists of enabled Apache modules and PHP extensions were identical.
During this process I turned on the following Apache modules in addition to the ones I already had on;
auth_digest_module
dav_lock_module
headers_module
info_module
proxy_module
proxy_ajp_module
ssl_module
status_module
I also turned on the following PHP extensions that were not on previously;
php_bz2
php_curl
php_exif
php_gettext
php_imap
php_openssl
php_pdo_odbc
php_pdo_sqlite
php_soap
php_sockets
php_sqlite
php_sqlite3
php_xmlrpc
A smarter man than me can tell me which module or extension or combination thereof was the exact solution for this issue.
However, for me, enabling these modules and extensions allowed me to make sure my environment was similar to my previous one, and in this case, solved the issue for me with no changes required to the code or .htaccess.
|
STACK_EXCHANGE
|
Obtaining the Frequency of Variables in SAS
I am attempting to determine the population number and frequency of total caloric intake among two groups of individuals (males at or above 3000 calories and males at or below 1800 calories). However, when I attempt to break up my dataset, SAS has an issue computing separate categories for the two groups, even though my code requests that the male observations be split between TOTALcat=1 and TOTALcat=2. Below is my code. What further steps do I need to take to determine both the population number and frequency of these groups within my larger dataset? Thanks
libname lab "C:\Users\14015\Pictures\BST #1";
data nutrition;
set 'C:\Users\14015\Pictures\BST #1\nutrition';
run;
proc contents data = nutrition;
run;
proc format;
value sex
1 = 'MALE'
2 = 'FEMALE';
run;
data nutrition;
set nutrition;
CARBS = CARBS*4;
FAT = FAT*9;
PROTEIN = PROTEIN*4;
run;
data nutrition;
set nutrition;
TOTAL = sum(of CARBS FAT PROTEIN);
run;
proc print data = nutrition;
var TOTAL CARBS FAT PROTEIN SEX;
run;
data nutrition;
set nutrition;
if sex=1 and TOTAL>=3000 then TOTALcat=1;
else sex=1 and TOTAL<=1800 then TOTALcat=2;
run;
proc freq data=nutrition;
table TOTALcat;
run;
It does not produce a frequency or population number for either male category out of my 8000+ observation set.
It's not a good idea to code like this continuously data nutrition; set nutrition;
You can combine all of your steps into a single data step. Your else statement has a typo and should be:
else if(sex=1 and TOTAL<=1800)
data nutrition;
set lab.nutrition;
carbs = carbs*4;
fat = fat*9;
protein = protein*4;
total = sum(carbs, fat, protein);
if(sex=1 and TOTAL>=3000) then TOTALcat = 1;
else if(sex=1 and TOTAL<=1800) then TOTALcat = 2;
run;
proc freq data=nutrition;
where sex = 1;
table TOTALcat;
run;
Using sample data, this is the output:
The total population size is 34, with 21 in Group 1 and 13 in Group 2.
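For comparison, the same derive-then-categorize logic can be sketched in plain Python. This is my own illustrative example, not part of the thread; the records are made up and the field names simply mirror the SAS variables:

```python
from collections import Counter

# Hypothetical records mirroring the SAS variables (grams of each macronutrient).
records = [
    {"sex": 1, "carbs": 500, "fat": 150, "protein": 200},
    {"sex": 1, "carbs": 100, "fat": 40,  "protein": 80},
    {"sex": 1, "carbs": 300, "fat": 90,  "protein": 120},
    {"sex": 2, "carbs": 200, "fat": 60,  "protein": 100},
]

def total_calories(r):
    # Carbs and protein contribute 4 kcal/g; fat contributes 9 kcal/g.
    return r["carbs"] * 4 + r["fat"] * 9 + r["protein"] * 4

def category(r):
    # 1 = male high intake (>= 3000 kcal), 2 = male low intake (<= 1800 kcal).
    if r["sex"] != 1:
        return None
    total = total_calories(r)
    if total >= 3000:
        return 1
    if total <= 1800:
        return 2
    return None  # males between 1800 and 3000 kcal fall in neither group

# Frequency table over males only, analogous to PROC FREQ with a WHERE clause.
freq = Counter(category(r) for r in records if category(r) is not None)
print(freq)  # Counter({1: 1, 2: 1})
```

Note that, as in the corrected SAS code, males strictly between the two cutoffs are left uncategorized rather than forced into a group.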
|
STACK_EXCHANGE
|
I have imported a couple of images through Raster Design 2011 and I have 19 layouts, but when I'm in a layout and I zoom in and out, the images seem to be shifting. The other problem I'm dealing with is that when I create PDFs, the vector info does not line up with the image. Each layout view has 4 viewports, and there are images in two of the viewports with a rotation on them. I have upped my memory in the raster options under setup, and I'm using the XP OS with 4 GB of memory.
Any help would be appreciated.
Solved! Go to Solution.
I've had similar issues with .SID aerial imagery in both Map3D and Civil3D when the imagery was inserted by the "Mapiinsert" command, which is similar to RD's attachment process. 2011 had an update for FDO-connected images, but working through a support request with AD, I was basically told that there wasn't much that could be done, as the "old way of doing it" was being replaced by the FDO technology. Long story short - I've seen the same problem but never had a solution to my issue. I wish you luck with it - if you're on subscription, I'd encourage you to open a support request with AD directly.
>> I'm using XP OS with 4 gb of memory.
IMHO this is the source of your problem: you can't use more than 2GB of memory for the application (for AutoCAD), and that sometimes results in problems when plotting with large images.
There may be two options you can use now:
Check with your admin if you can use the 3GB switch. It depends on which drivers may not support it (graphics-card drivers are especially sensitive) and on what image types you use (there are some that would then crash AutoCAD when loading them), but it's a great option for getting about 1GB more memory for AutoCAD to use, and so fewer problems when plotting.
Look >>>here<<< for how to activate it.
And as to the risk: you can set WinXP to start in both modes, one using the 3GB switch and the other without it. So if the admin gives you the option to boot Windows in both variants, you needn't be afraid of crashes as long as you save a bit more often than normal. And if you see that your PC now crashes when using specific commands, loading special image formats, or running specific applications, you can just boot again without the switch and that's it.
Plot to DWF
Try plotting to DWF: instead of plotting to the plotter, create a DWF plot, then use DesignReview to open the DWF file and print it to the plotter. On the one hand DWF uses less memory to plot, you are not dependent on any plotter driver, plus you have the option (within the details of your DWF pc3 settings) to reduce the resolution if it's really necessary.
The best option to avoid the whole problem would be to upgrade to Win7 64bit, of course.
- alfred -
Do you have the same issue with a new file created from scratch with only a few layouts and images in it?
Try placing the images and geometry close to the origin (point 0,0,0).
If the issue persists, can you please upload your files so that we can have a look?
Product Support Specialist
Thanks for all the feedback; it seems that it comes down to memory at the end of the day using XP. If I work in model space and clip the image to only what I need, I don't run into any problems, but in layouts it remains a problem. The images I'm working with are huge (1.2 GB). I've tried this in 64-bit AutoCAD on Windows 7 with 16 GB of memory and the problem disappears.
I am also having an issue with my designs not lining up with my background aerial images when I zoom in and out in paper space. This is only an issue when I use the dview command and have twisted my drawings. Does anyone have a suggestion?
Does it happen with all the files or only one?
Can you reproduce it with multiple machines?
Do you have a supported graphics card and driver?
Is the geometry too far from the origin (0,0,0)?
|
OPCFW_CODE
|
A Python library for easy creation and manipulation of Google Earth KML and KMZ placemark files. Please get your copy from http://pykml.cvs.sourceforge.net/viewvc/pykml/pykml/?view=tar
A neural net module written in python. The aim of the project is to provide a large set of neural network types accessed by an API that is easy to use and powerful.
A threaded Web graph (Power law random graph) generator written in Python. It can generate a synthetic Web graph of about one million nodes in a few minutes on a desktop machine. It implements a threaded variant of the RMAT algorithm.
Bit operations on integers for Python - fast C implementation of bit extraction, counting, reversal etc.
Library of Object Oriented/Functional Sorting Algorithms aimed for the taste of distributed computing.
Abandoned version of qbc. DOES NOT WORK properly
SenseRank Sys: builds the dictionaries (multidimensional matrices) of words’ values; for a given utterance in a certain language, builds a figure in the multidimensional space (the matrix space) of values (a visual schema), which is a topological view of its sense.
Slavica is a python library for classification, clustering and document retrieval.
Design and develop Recommendation and Adaptive Prediction Engines to address eCommerce opportunities. Build a portfolio of engines by creating and porting algorithms from multiple disciplines to a usable form. Try to solve NetFlix and other challenges.
This project has moved to GitHub: https://github.com/pvaret/spyrit
Creates and operates a stepped state machine
Implements a stepped state machine, i.e. a state machine which executes a single state transition at a time. Because of this, no data, e.g. state data, can be kept in memory between executions. Instead, any such data must be stored in persistent storage between executions. This permits operation of the state machine as a CGI program in a web server. A WSGI or fastCGI or other such web server is not required. Symbols may be received from sources outside the state machine, or may be generated internally by the state functions.
Sudoku Maker is a generator for Sudoku number puzzles. It uses a genetic algorithm internally, so it can serve as an introduction to genetic algorithms. The generated Sudokus are usually very hard to solve -- good for getting rid of a Sudoku addiction.
The Movinator is a movie database application. It manages information about movies plus ratings assigned to movies by movie critics. Based on these ratings and user ratings, the application can also make movie recommendations.
ansible is a research framework for developing neural networks, written in Python. ansible provides three basic capabilities; a UI server; network statistics for system analysis; and a 3-layered network processing architecture.
Emulates old computers in text mode; the code is ideal for creating a kernel for an operating system. To begin with, the project can emulate the 8080 and Z80 CPUs.
BA-Arbeit 2010 TU Dortmund
compactpath is a python package to handle compacting of filepaths. Compacting of filepaths may be useful in GUI programming, where filepaths of arbitrary length have to be displayed in widgets with limited visual space.
Algorithms that run our universe | Your personal library of every algo
Fast cython implementation of trie data structure for Python. Development is inactive, but moved to: http://github.com/martinkozak/cytrie.
eBarter forms trades from commitments. It measures economic values without the need of a common value standard, distributes fairly the gain produced among participants, and gives preference to trades where participants have the weakest requirements.
C++ collection mostly for image processing
libGo is a C++ class library containing all kinds of things that proved useful to me. Included are: - Linear algebra, using LAPACK and CBLAS - V4L(1) image grabber - Multithreading - Image containers (up to 3D) - Some simple optimisation code - Python embedding helper - Matlab interface - .. and other things, have a look at the HTML documentation! golib grew over many years, things I had use for have been added now and then. Some parts are better taken care of than others. If you find anything spectacularly wrong or badly documented, and need assistance, please drop me a line.
A simple (~20 line python) O(n^6) algorithm for the traveling salesman problem that seems to do pretty well for most graphs; so well that I have not been able to find a graph which it does not solve optimally. Those with spare cycles are welcome to help out.
Leet is CCEx's software application for on-the-fly encryption (OTFE).
The name leet stands for "Linux exquisite encryption tool", it will be a software application for on-the-fly encryption, similar in its functionality to TrueCrypt. The goal of leet however is to be simpler and as user friendly as possible, making encryption and securing of information accessible to anybody, even those who don't necessarily have any prior knowledge of data securing, algorithms and encryption. However it's not targeted at this group of users only, part of the ambition of this project is to reach companies, institutions, governments (etc...) as well.
Python module to track the overall median of a stream of values "on-line" in reasonably efficient fashion.
pyMVC is a Model-View-Controller implementation library and framework written in Python for fast and high-grade software development.
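As an illustration of the technique the running-median entry above describes, here is a generic two-heap sketch (my own example, not the module's actual code): the lower half of the stream lives in a max-heap and the upper half in a min-heap, so the median is always available at the heap tops in O(log n) per update.

```python
import heapq

class RunningMedian:
    """Track the median of a stream using two heaps: a max-heap for the
    lower half (stored negated, since heapq is a min-heap) and a
    min-heap for the upper half."""

    def __init__(self):
        self._lo = []  # max-heap via negation: largest of the lower half on top
        self._hi = []  # min-heap: smallest of the upper half on top

    def add(self, x):
        if self._lo and x > -self._lo[0]:
            heapq.heappush(self._hi, x)
        else:
            heapq.heappush(self._lo, -x)
        # Rebalance so the halves differ in size by at most one.
        if len(self._lo) > len(self._hi) + 1:
            heapq.heappush(self._hi, -heapq.heappop(self._lo))
        elif len(self._hi) > len(self._lo):
            heapq.heappush(self._lo, -heapq.heappop(self._hi))

    def median(self):
        if not self._lo:
            raise ValueError("median of an empty stream")
        if len(self._lo) > len(self._hi):
            return -self._lo[0]
        return (-self._lo[0] + self._hi[0]) / 2

m = RunningMedian()
for x in [5, 1, 9, 3, 7]:
    m.add(x)
print(m.median())  # 5
```

With an even number of observations the tracker returns the mean of the two middle values, which is one common convention; a library could equally return either middle element.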
|
OPCFW_CODE
|
Contributors would give the GORGON permission to reproduce their stuff. While I will happily lay out some material for inclusion, the intention would be for YOU to lay out your own stuff in 8.5x11 (.5" margins!) with all fonts embedded. This will cut down the time of production considerably and give GORGON a kind of punk look, with no continuity of style from article to article.
GORGON would be sold at-cost as a POD item from Lulu. So about 2,000 coppers (~$20) including shipping and handling.
I will take pains to not include any illustrations or artwork that is not from contributors or in the public domain. There is no shame in GORGON's game.
LINK: Gorgon Quarterly G+ community
SUBMISSION INFO & GUIDELINES
1. By submitting your stuff to GORGON, you give us permission to publish it in perpetuity on a not-for-profit basis.
2. In other words, I WON'T MAKE A RED COPPER ON YOUR WORK. GORGON pdfs are free to download, and GORGON print products will be sold strictly at-cost on Lulu.
3. All submissions should be in PDF format. Dimensions 8.5" x 11" (.5 inch margins!) with all fonts embedded. Greyscale/b&w. REMEMBER: The way you lay it out is the way it's going to look in GORGON. More info on embedding fonts: http://tinyurl.com/k98286f
4. Submit stuff to the GORGON QUARTERLY Google drive here: http://tinyurl.com/pdv7oag
5. Make sure you have the rights to any illustrations you use in your submission. Whether you're an illustrator or an author, you retain the rights to your work, and you can use that work anywhere you like, regardless of its appearance in GORGON. Likewise, you can use work that you previously published elsewhere as long as you have the rights to do so.
6. There are NO LIMITS on the page-counts of your submissions, and NO LIMIT to the number of items you submit. That said, there's no guarantee that everything you submit will appear in GORGON right away or in the same issue. I will try to be flexible to your creative intentions, however.
7. There are no rules governing the TYPES of things you choose to submit. If it's something that would appear in an OSR blog post and it's good enough for the dead tree treatment, then it's allowed. If possible, I will attempt to keep the contents of a single issue loosely related thematically, or organized into a set of loosely related themes.
8. OGL: If you want to include an OGL license with your submission, please make it a separate PDF document that CLEARLY INDICATES what work(s) it addresses. If you have no interest in writing up and including something like this, you'll be covered anyway by a general purpose Creative Commons license that will appear on the last page of each issue.
# Read MP4 metadata and export a creation-date listing to Excel
import os
import openpyxl
from ffprobe import FFProbe

path = 'videos/'
folderList = os.listdir(path)

# Create the Excel workbook and the header row
wb = openpyxl.Workbook()
hoja = wb.active
hoja.title = "Videos"
hoja["A1"] = 'No'
hoja["B1"] = 'FIELD'
hoja["C1"] = 'VIDEO FILE'
hoja["D1"] = 'CREATION DATE'
hoja["E1"] = 'CREATION TIME'

# Iterate over the folders of videos
i = 2  # first data row
for folder in folderList:
    folderPath = os.path.join(path, folder)
    # Iterate over each video in the folder
    for video in os.listdir(folderPath):
        if not video.lower().endswith('.mp4'):
            continue
        videoPath = os.path.join(folderPath, video)
        # Get the metadata of the video
        metadata = FFProbe(videoPath)
        creation = metadata.metadata['creation_time']
        # Save the creation information, e.g. '2021-03-01T12:34:56'
        hoja["A" + str(i)] = i - 1
        hoja["B" + str(i)] = folder
        hoja["C" + str(i)] = video[:-4]
        hoja["D" + str(i)] = creation[0:10]
        hoja["E" + str(i)] = creation[11:19] + ' GMT-0'
        i += 1

# Save the spreadsheet
wb.save('Lista de Videos.xlsx')
print('DONE')
By: Zhuohan Zhang, RIG Inc Intern Researcher
Since 2020, COVID-19 has negatively impacted our daily lives. The coronavirus disease pandemic took many people's lives and seriously affected the economic and social development of many countries. To help prevent such a disease, we can construct complex networks, which can reveal necessary information about disease transmission across countries. By definition, a complex network is a graph with non-trivial topological features that do not occur in simple networks such as lattices or random graphs, but often occur in networks representing real systems (Wikipedia). Scholars have found that "a complex network with a community structure can promote or effectively inhibit the spread of diseases" (Stegehuis C, Van Der Hofstad R).
In a complex system, features such as nodes and connections need to be defined. A complex network is a set of many connected nodes that interact in different ways. For example, it can describe friendship, where two people are connected if they are friends, or family, where two people are connected if they belong to the same close family. A computer network connects two computers if they are in the same domain, and a COVID network connects two people if they have been in close contact and infection may have passed between them. The following graphs intuitively illustrate the structure of a complex network, where nodes represent people and lines represent connections. Suppose one infected person, the initial node in the network, did not wear a face mask and stayed in a closed room with another person. The second person is then highly likely to test positive for COVID-19, so a line is drawn between these two people. As more people interact with the two, more nodes are added and more lines are connected. This forms a complex network.
The detailed principle underlying complex networks is topology. To understand the structure of a network, we first need the degree distribution, which gives the probability that a randomly chosen node has a given number of connections. We also need the clustering (aggregation) coefficient, defined as the probability that two nodes directly connected to a third node are also connected to each other. Also required are the shortest path length between two nodes, i.e. the minimum number of steps to reach a node Vi from a node Vj, and the average path length over the network (MIT edu). As more and more research is completed, scholars have concluded that the global COVID-19 pandemic has some prominent complex-network properties (Zhu, Kou, Lai, Feng, Du). Since the network varies over time, a time-varying dynamical network is more appropriate for describing the synchronization phenomena. It is determined by the inner-coupling matrix and by the eigenvalues and corresponding eigenvectors of the coupling configuration matrix of the network (Lu Chen).
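These quantities — the degree distribution, the clustering (aggregation) coefficient, and shortest path lengths — can be computed directly on a small adjacency-list graph. The sketch below uses only the standard library and a hypothetical toy contact network; it is an illustration of the definitions, not code from any cited study.

```python
from collections import Counter, deque
from itertools import combinations

def degree_distribution(adj):
    """P(k): probability that a randomly chosen node has k connections."""
    counts = Counter(len(nbrs) for nbrs in adj.values())
    n = len(adj)
    return {k: c / n for k, c in counts.items()}

def clustering(adj, v):
    """Probability that two neighbours of v are themselves connected."""
    nbrs = list(adj[v])
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)

def shortest_path_length(adj, src, dst):
    """Minimum number of steps from src to dst, via breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return dist[u]
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return None  # unreachable

# Toy contact network: infected hub 0 shared a room with 1, 2, 3
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4}, 4: {3}}
```

Averaging `shortest_path_length` over all node pairs gives the average path length mentioned above.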
In addition, RIG's Dynamic Trust models can be applied to complex networks as well. This is a useful model for trust evaluation that dynamically predicts trust levels from given data: as we gain more information about an agent, device, or service, we develop a trust level. RIG's Dynamic Trust models have extensive applications, such as healthcare fraud evaluation, connected-car evaluation, and service rating. We can apply RIG's Dynamic Trust models, combined with machine learning techniques, to predict COVID-19 disease transmission across countries. For each node of a complex network, we can identify symptoms of infected people, ranging from mild symptoms to severe illness: mild symptoms include fever, cough, and shortness of breath; moderate symptoms include headache, new loss of taste or smell, and sore throat; severe symptoms include congestion, nausea, and diarrhea. After assigning these levels, supervised machine learning models such as SVMs or random forests can classify the data, and the outputs determine the probability of infection. The higher the level, the higher the possibility of being a high-degree node in the complex network. Because of the connectivity of high-degree nodes, link-removal strategies suggest that complete isolation of susceptible nodes from infected nodes is an effective method for reducing the average number of new infections, and thus a way to prevent the spread of COVID-19 (M. Bellingeri). The combination of Dynamic Trust models and machine learning techniques can therefore help predict the transmission of the COVID-19 virus and provides an effective and efficient way to prevent its spread.
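A toy version of this pipeline can be sketched in a few lines. To be clear about assumptions: the severity scores below are invented for illustration (using the symptom groups named above), `infection_level` is a crude rule-based stand-in for the SVM/random-forest classifier, and `isolate_high_degree` is a simplistic link-removal rule — none of this is RIG's actual Dynamic Trust model.

```python
# Hypothetical severity scores for the symptom groups named above
SEVERITY = {
    'fever': 1, 'cough': 1, 'shortness of breath': 1,              # mild
    'headache': 2, 'loss of taste or smell': 2, 'sore throat': 2,  # moderate
    'congestion': 3, 'nausea': 3, 'diarrhea': 3,                   # severe
}

def infection_level(symptoms):
    """Rule-based stand-in for the ML classifier: worst severity observed."""
    return max((SEVERITY.get(s, 0) for s in symptoms), default=0)

def isolate_high_degree(adj, levels, degree_cutoff=2, level_cutoff=2):
    """Link-removal sketch: drop every edge touching a likely-infected
    high-degree node, returning a new adjacency dict."""
    flagged = {v for v, nbrs in adj.items()
               if len(nbrs) >= degree_cutoff and levels.get(v, 0) >= level_cutoff}
    return {v: {w for w in nbrs if w not in flagged and v not in flagged}
            for v, nbrs in adj.items()}
```

In a real system, the classifier would be trained on labelled case data and the trust level would be updated as new observations arrive; the structure of the computation, however, is the same.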
M. Bellingeri, M. Turchetto, Modeling the Consequences of Social Distancing Over Epidemics Spreading in Complex Social Networks: From Link Removal Analysis to SARS-CoV-2 Prevention, 28 May 2021
Stegehuis C, Van Der Hofstad R, Van Leeuwaarden JS. Epidemic spreading on complex networks with community structures. Sci Rep (2016) 6(1):29748–7. doi:10.1038/srep29748
Molecular and Nanoscale Electronics
Molecular (nanoscale) electronics is attracting great attention due to potential applications in future sensor devices, computing technology and related fields. The work in the Bryce group is highly interdisciplinary involving close collaboration with experimentalists (Universities of Liverpool, Bern, Basel, Madrid) and theoreticians (University of Lancaster). In this context we are developing monodisperse oligomers which are ca. 2-10 nm in length ("molecular wires") comprising conjugated backbones with terminal substituents (e.g. thiol, pyridyl) which assemble onto metal electrodes to provide metal | molecule | metal junctions. These are oligo(aryleneethynylene) derivatives,1 e.g. molecule 6, tolane derivatives,2 oligoyne derivatives 73 and oligofluorene derivatives4 for probing the electrical properties of single molecules. We have also designed and synthesised molecules to bridge silicon nanogaps.5
Related projects probe charge transport through oligoyne systems end-capped with donor and acceptor units;6 molecule 8 is a prototype. Oligofluorene bridges have been probed as molecular wires in molecules such as 9 (with Dirk Guldi, University of Erlangen-Nurnberg).7 A wide range of techniques are applied to the study of these molecules, including cyclic voltammetry, spectroelectrochemistry, steady-state and time-resolved photolysis, X-ray crystallography and EPR spectroscopy.
Figure1. Single-molecule electronics. Left image: Tolane molecules with different anchor groups sandwiched between gold electrodes (Durham-Bern-Lancaster collaboration; figure courtesy of the University of Bern). Right image: A fluorene molecular wire with fullerene anchor groups assembled on a gold surface probed with an STM tip (Durham-Madrid collaboration; figure courtesy of IMDEA-Nanosciences, Madrid).
- W. Haiss, C. Wang, I. Grace, A. S. Batsanov, D. J. Schiffrin, S. J. Higgins, M. R. Bryce, C. J. Lambert, R. J. Nichols, Nature Materials 2006, 5, 995; R. Huber, M. T. Gonzalez, S. Wu, M. Langer, S. Grunder, V. Horhoiu, M. Mayor, M. R. Bryce, C. Wang, R. Jitchati, C. Schoenenberger, M. Calame, J. Am. Chem. Soc. 2008, 130, 1080; C. Wang, M. R. Bryce, J. Gigon, G. J. Ashwell, I. Grace, C. J. Lambert, J. Org. Chem. 2008, 73, 4810; S. Martin, I. Grace, M. R. Bryce, C. Wang, R. Jitchati, A. S. Batsanov, S. J. Higgins, C. J. Lambert, R. J. Nichols, J. Am. Chem. Soc. 2010, 132, 9157-9164.
- W. Hong, D. Z. Manrique, P. M. García, M. Gulcur, A. Mishchenko, C. J. Lambert, M. R. Bryce, T. Wandlowski, J. Am. Chem. Soc. 2012, 134, 2292.
- C. Wang, A. S. Batsanov, M. R. Bryce, S. Martin, R. J. Nichols, S. J. Higgins, V. M. Garcia-Suarez, C. J. Lambert, J. Am. Chem. Soc. 2009, 131, 15647.
- E. Leary, M. T. González, C. van der Pol, M. R. Bryce, S. Fillipone, N. Martín, G. Rubio-Bollinger, N. Agrait, Nano Lett. 2011, 11, 2236-2241.
- G. J. Ashwell, L. J. Phillips, B. J. Robinson, B. Urasinska-Wojcik, C. J. Lambert, I. M. Grace, M. R. Bryce, R. Jitchati, M. Tavasli, T. I. Cox, I. C. Sage, R. P. Tuffin, S. Ray, ACS Nano 2010, 4, 7401-7406.
- C. Wang, L.-O. Palsson, A. S. Batsanov, M. R. Bryce, J. Am. Chem. Soc. 2006, 128, 3789; L.-O. Pålsson, C. Wang, A. S. Batsanov, S. M. King, A. Beeby, A. P. Monkman, M. R. Bryce, Chem. Eur. J. 2010, 16, 1470-1479.
- M. Wielopolski, G. de Miguel Rojas, C. Van der Pol, L. Brinkhaus, G. Katsukis, M. R. Bryce, T. Clark, D. M. Guldi, ACS Nano 2010, 4, 6449-6462.
Contact: Martin Bryce (email@example.com) for more details.
Debugging issues with CloudKit subscriptions
Q: My CloudKit subscriptions don't trigger notifications after relevant changes are made. How do I debug this?
A: Most CloudKit subscription issues are due either to incorrect assumptions about when and where notifications should fire, or to improperly configured CloudKit containers or subscriptions. This document introduces some cases that are not expected to trigger CloudKit notifications, followed by how to verify the state of your iCloud container and how to avoid some common issues related to CloudKit subscriptions.
Cases that aren't expected to trigger CloudKit notifications
When working with CloudKit subscriptions, be aware that:
Notifications won't be delivered to your device if the notification settings for your app are off. CloudKit relies on the Apple Push Notification service (APNs) to deliver notifications. If your app is not allowed to receive push notifications on the device, you won't see them. To turn the settings on, go to Settings > Notifications, then navigate to your app's settings screen.
CloudKit notifications won’t be delivered to the device on which the relevant changes are made.
The Simulators don't support push notifications. To test push notifications you must be running directly on the target platform.
CloudKit notifications won't be delivered to your app if the notifications' shouldSendContentAvailable is true and meanwhile your app is force quit. On iOS, users can force quit an app by double-tapping the home button and swiping it away from the multitasking UI.
CloudKit generates a notification for every relevant change. However, notifications can be coalesced by the APNs if too many occur in a short period of time. In that case, you will still get the last notification, and can retrieve the unhandled ones with CKFetchNotificationChangesOperation and process them from there.
With these cases in mind, if your issue is still there, the next step is to verify the state of your iCloud container.
Verify the state of your iCloud container
You can verify the state of your iCloud container with the following steps:
Prepare two iOS devices running the latest iOS, log in iCloud with the same Apple ID, and make sure iCloud Drive is On. You can use an iOS Simulator to change the CloudKit database, but only a device can register and receive push notifications. CloudKit works on macOS and tvOS as well, so you can set up Macs or Apple TVs similarly if you are using a macOS or tvOS app for this verification.
Download Apple's CloudKit Catalog sample, change the bundle ID to the one being used in your app, pick the right iCloud container in Xcode's Capabilities pane if you use a custom one, then build and run the sample on your devices. Make sure notification settings are allowed, as discussed above.
Create a query subscription to track record creations. On one device, run the CloudKit Catalog sample, go to the saveSubscription screen, set the subscription type to Query, input something in the name BEGINSWITH field that will be used in the subscription predicate, then tap the save button.
On the other device, go to the saveRecord screen and add a new record; make sure the record name begins with the string you just entered so that it matches the subscription predicate, then tap the save button.
Your first device should soon get a notification, proving that your iCloud container is functioning. If this does not work, it is likely that your iCloud container is in an invalid state. In that case, you can report it by filing a bug report, and continue your development using a new iCloud container and/or logging in to iCloud with a new Apple ID.
To use a new iCloud container, go to Xcode's Capabilities pane, switch the Containers option to Specify custom containers, then pick or add a new container. To create a new Apple ID, follow the steps on the Create Your Apple ID page.
With the same iCloud container and Apple ID, if CloudKit Catalog works well but your app doesn’t, the next step is to check if your app has any coding issue related to CloudKit subscriptions.
Avoid common issues related to CloudKit subscriptions
The following issues are frequently seen when working with CloudKit subscriptions:
Apps haven't registered for push notifications.
When you pick the CloudKit service on the Capabilities pane, Xcode automatically adds the Push Notifications service for your app, so you don't need to do extra configuration on your app ID in the membership portal. However, like other apps, CloudKit apps need to register for push notifications by calling the registerForRemoteNotifications() method. See the Registering for Push Notifications section of TN2265 for the details.
Note that CloudKit handles device tokens for you, so you don't need to implement the application(_:didRegisterForRemoteNotificationsWithDeviceToken:) method unless you need it for another purpose.
Apps fail to create subscriptions.
Creating subscriptions can fail for a number of reasons, so error handling when saving subscriptions is very important. Due to the asynchronous nature of CloudKit work, error handling is an essential part of writing a CloudKit application. See the WWDC session, CloudKit Best Practices, for how to better handle CloudKit errors.
One common error is saving subscriptions that refer to un-indexed fields in the production environment, which generates an Invalid Arguments (invalidArguments) error. You can fix it by going to the production environment in CloudKit Dashboard, selecting the field, and checking the Query box in the field's index settings.
Apps don't create subscriptions for every user.
CloudKit subscriptions are per-user, which means your app needs to create subscriptions for each and every user that should receive push notifications. Logging in to CloudKit Dashboard with your account and seeing subscriptions there doesn't mean all users have them.
For debugging purposes, you can use CKFetchSubscriptionsOperation to fetch and inspect all subscriptions for the current user. However, you don't need to check for existence before creating subscriptions. Instead, you can create subscriptions with specified subscription IDs and save them directly – if a subscription with the same ID already exists on the server, no duplicate will be created.
The changes don’t really match the subscription predicate.
When creating a subscription whose predicate contains a CKReference field, be sure to use a CKRecordID object as the value for the field when creating the predicate. Using a CKRecord object in this case doesn't currently work.
To rule out issues related to predicate formats and values at debugging time, simply replace your predicate with truePredicate and check if the issue is still there.
Notifications get lost in the delivery process.
CloudKit notifications can get lost if something is wrong on the network connection used by the APNs. If your issue doesn't fall into the above cases, check if notifications are lost in the delivery process by installing the Persistent Connection Logging profile on your device, then analyzing the push service logs. See the Observing Push Status Messages section of TN2265 for how to do that.
Document Revision History
New document that explains how to debug the issue that CloudKit subscriptions don't trigger notifications when the relevant changes are made.
Handling errors – Progress® Software Documentation – Trapping errors within a procedure using NO-ERROR option. The following example assumes that you created the sp_img table and defined the sp_img_out stored procedure with two data types, integer and image, in the MS SQL Server as shown in the following code samples:.
Microsoft SQL Server 2016 includes features that make this kind of work simpler and almost error free. The ESBI database stores historical data in slowly changing dimension (SCD) type-2 mechanisms that are powered by manually written SQL stored procedures.
Background for this SQL Server tutorial: in 2011 my wife joined one of the leading chains of institutes in Bangalore as a .NET technical trainer.
The following article introduces the basics of handling errors in stored procedures and how to implement error handling, including logging errors to the SQL Server error log.
If any errors occur on the server, you can clear the table by using SQL Server Management Studio. For example, you can run the following SQL query: DELETE FROM [My NAV Database Name].[dbo].[Server Instance]
SQL Server: Error Logging and Reporting within a Stored Procedure – Mar 2, 2015. A division by zero fails and prints an error message in an error window: Divide by zero error encountered. Seeing errors on-screen is fine in development, but in a production environment you do not get that flexibility.
To see the TRY – CATCH construct in action, try to execute the following query in SQL Server 2005 / 2008 / 2012. Once you execute the code, it will create a stored procedure that can be called from any other stored procedure for error handling.
Some of the important features SqlOps provides include a T-SQL editor with autosuggestions and error checking. explorer lets developers browse through a SQL Server and view tables, views, stored procedures and more. From an.
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel Data Warehouse SQL Server is a central part of the Microsoft data platform. SQL.
I have a single "Pre-Processing" Stored procedure which calls multiple stored procedures inside. Finally when all my SPS (inside) have successfully executed, I want.
This is a restriction in SQL Server and there is not much you can do about it. Except than to save the use of INSERT-EXEC until when you really need it.
Dec 31, 2016. At Derivco we have a lot of stored procedures, and they can be fairly big (3000 – 4000 loc), so you can imagine the number of error checks we have in them. So in SQL Server 2005 Microsoft introduced the notion of structured exception handling as I mentioned above, and it was implemented through.
SQL SERVER – 2005 – Explanation of TRY...CATCH and ERROR functions – Apr 11, 2007. ERROR_SEVERITY returns the severity level of the error that invoked the CATCH block; ERROR_STATE returns the state number of the error; ERROR_LINE returns the line number where the error occurred; ERROR_PROCEDURE returns the name of the stored procedure or trigger in which the error occurred.
Error handling in a T-SQL stored procedure in SQL Server.