Can someone show me how to fix this?
Most of the references are in ORANGE
Hello everyone,
I am a complete LabVIEW beginner and would really appreciate your help.
I want to write a program to set and read data from a temperature controller. The temperature controller (CN7600) is connected to the PC via an RS-485 interface. It uses ASCII, 9600 bps, 7 data bits, even parity, and 1 stop bit.
As far as I can tell, the "Communication List" (see attachment) is written in hexadecimal.
The device is detected in the Measurement & Automation Explorer.
The current process value should be read, but writing and reading data does not work.
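In case it helps to narrow down where the communication fails, here is a minimal sketch (Python with pyserial, outside LabVIEW) of a Modbus ASCII read request using the port settings above. The COM port name, slave address, and the 0x1000 register address are only assumptions; the real process-value address has to come from the Communication List.

import serial

def lrc(payload: bytes) -> int:
    # Modbus ASCII LRC: two's complement of the byte sum, modulo 256.
    return (-sum(payload)) & 0xFF

def read_register_frame(slave: int, start_reg: int, count: int) -> bytes:
    # Function 0x03 = read holding registers; frame is ':' + hex payload + LRC + CR LF.
    payload = bytes([slave, 0x03,
                     (start_reg >> 8) & 0xFF, start_reg & 0xFF,
                     (count >> 8) & 0xFF, count & 0xFF])
    body = payload + bytes([lrc(payload)])
    return b":" + body.hex().upper().encode("ascii") + b"\r\n"

port = serial.Serial("COM3", baudrate=9600, bytesize=serial.SEVENBITS,
                     parity=serial.PARITY_EVEN, stopbits=serial.STOPBITS_ONE,
                     timeout=1.0)
# 0x1000 is a placeholder process-value address -- check the Communication List.
port.write(read_register_frame(slave=1, start_reg=0x1000, count=1))
print("raw reply:", port.readline())   # the ASCII reply also ends with CR LF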
I would appreciate any comments and help.
Hey fancy folk,
I was wondering if anyone knew how the third octave data is calculated in the sound and vibe toolkit. From what I can tell, the octave analysis VI has a time averaging feature to give me the SPL of a specific band. I ask because I'm trying to do third octave analysis via summing the spectrum lines of the FFT.
I currently have a VI that loads a TDMS file and sends the data to a queue, which allows me to perform a 50% overlap on my data. Next, I window that data and send it to an FFT. After that, I search through the waveform's spectral lines and perform an RMS calculation on anything that is in the third-octave bands. Finally, this data is A-weighted. However, this method doesn't appear to give me the correct data, based on the data another coworker has gotten and what the third octave analysis VI has given me. I then tried using the Power in Band VI to calculate the band power for the third-octave frequencies. From what I can tell, it is closer to what I'm expecting my data to look like. However, I am not sure how that VI calculates the band power either, so it feels like I'm chasing my tail on it.
Attached is a zip file with my code and the background noise of my chamber. The noise is 10 seconds long and IIRC gives 71 FFT samples. The main code to execute is "Forum-third-octave-FFT-based.vi". The "FFT spectrum info to octave info" VI is one I made to transfer as much spectrum info as possible from the FFT to the octave VIs for the third-octave calculations that I perform. The main reason is that, after the calculations, I want to perform A-weighting on the third-octave data that I have calculated via the RMS of the spectral lines. The TDMS file is the background noise of my chamber.
Currently, I don't think I'm doing the calculations for the higher frequencies correctly. From what I can tell in the documentation, the higher-frequency data tends to be decimated, and I am doing no decimation. I'm not sure if that's important or not, but I figured I'd throw that out there. The decimation may be for collecting data only; again, I'm not too sure.
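For reference, this is roughly the calculation I am attempting, written as a NumPy sketch of a single FFT block (the base-2 band edges and the window normalization are my own assumptions, not necessarily what the toolkit does):

import numpy as np

def third_octave_spl_from_fft(x, fs, p_ref=20e-6):
    # One windowed FFT block -> power per 1/3-octave band -> SPL in dB re 20 uPa.
    n = len(x)
    win = np.hanning(n)
    X = np.fft.rfft(x * win)
    # Periodogram normalized so that integrating over frequency gives power (Pa^2).
    psd = (np.abs(X) ** 2) / (fs * np.sum(win ** 2))
    psd[1:-1] *= 2.0                          # fold negative frequencies (one-sided)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    df = freqs[1] - freqs[0]

    centers = 1000.0 * 2.0 ** (np.arange(-17, 14) / 3.0)   # ~20 Hz to ~20 kHz
    spl = []
    for fc in centers:
        lo, hi = fc * 2.0 ** (-1.0 / 6.0), fc * 2.0 ** (1.0 / 6.0)
        band_power = np.sum(psd[(freqs >= lo) & (freqs < hi)]) * df
        spl.append(10.0 * np.log10(max(band_power, 1e-30) / p_ref ** 2))
    return centers, np.array(spl)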
I am hoping someone can clear up how the third-octave info is calculated by the third octave analysis VIs and/or take a look at my code to see if I'm doing anything wrong in calculating the third-octave SPL from the FFT spectrum.
Thanks,
Matt
Greetings all,
I apologize in advance if this topic has been addressed, I couldn't find anything after searching.
I have a fixed-point number <+/- 13,13>, which goes from -4096 to 4095. I would like to divide it by 2048, which happens to be a very simple bit-shift operation. If I keep the word and shift the Q-point to cast it as <13,2>, I get the answer I'm looking for.
This works outside of an FPGA, but the type-casting block is not FPGA-compatible.
The 'Logical Shift' block's help documentation says to use the 'Scale By Power Of 2' block with signed data types; however, when fed with a <13,13> FXP source it outputs a <13,13> FXP, so I just lose all of the fractional bits.
Is there some way to re-cast (and be compatible with FPGA) the fixed point variable such that the word isn't changed but the virtual q-point gets moved? I can accomplish it with a 'high throughput divide' block but that seems like overkill for a simple shift scaling.
This is a super simple / common operation so I must be missing something. In other languages this would be trivial. Any thoughts?
Thanks in advance,
Chris
EDIT:
About 5 minutes after posting the question, I figured it out. Isn't that the way it always goes?
Anyway, for those who see this in the future: I used a 'Number To Boolean Array' function connected to a 'Boolean Array To Number' function, and you can set the output format of the 'Boolean Array To Number' function to whatever fixed-point format the heart desires.
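To make the idea concrete for future readers, here it is in plain-integer terms (a Python sketch, not the FPGA code): dividing by 2048 only moves the binary point, the 13-bit word itself is untouched.

raw = -1234                          # a <+/-13,13> value, i.e. a plain 13-bit integer
word = raw & 0x1FFF                  # the 13 bits that actually sit on the FPGA

# Reinterpret the same word as <+/-13,2>: 2 integer bits, 11 fractional bits.
sign_extended = word - 0x2000 if (word & 0x1000) else word
print(sign_extended / 2.0 ** 11)     # -0.6025390625
print(raw / 2048)                    # -0.6025390625, same number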
I finally got a VM-based (VirtualBox) FPGA compile worker (ISE 14.7 on CentOS) working. I was pleasantly surprised that my compile times have dropped by almost 50%, even when running the VM on the same machine as my original LabVIEW FPGA installation.
Is there a way to get the compile worker to auto-start when the VM is powered on (with no user needing to log into the VM)?
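I haven't found a documented way, so the best I can come up with is a guess: if the compile worker can be launched from a command line on CentOS, a systemd unit along these lines could start it at boot without anyone logging in. The ExecStart path below is a placeholder, not the real command.

# /etc/systemd/system/fpga-compile-worker.service  (sketch only)
[Unit]
Description=LabVIEW FPGA compile worker
After=network.target

[Service]
Type=simple
User=worker
# ExecStart is a placeholder; substitute whatever actually launches the worker.
ExecStart=/path/to/compile-worker-launch-command
Restart=on-failure

[Install]
WantedBy=multi-user.target

# then: sudo systemctl enable --now fpga-compile-worker.service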
Thanks,
XL600
I am fairly new to using LabVIEW and the OPC UA Toolkit 2017. I would not consider myself a LabVIEW expert, but I feel like I generally know how to use it.
Anyway, I can't figure out how to create an OPC UA item for an array of floats using the "Add Item" VI. I can create the item, but only with one element; I need to add more elements to the array and change the values. Can someone provide a sample or guidance on how to achieve this?
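For comparison, this is how the same idea looks in the open-source python-opcua package (not the NI toolkit): the item becomes array-valued simply because its initial value is an array, and later writes replace the whole array. I am hoping the Add Item VI behaves the same way when wired with a 1-D DBL array, but I haven't been able to confirm it.

from opcua import Server   # open-source python-opcua, shown only as a conceptual parallel

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/demo/")
idx = server.register_namespace("http://example.org/demo")

# Passing a list as the initial value makes the node an array of doubles.
arr = server.get_objects_node().add_variable(idx, "FloatArray", [0.0] * 8)
arr.set_writable()

server.start()
arr.set_value([float(i) for i in range(8)])   # update all elements at once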
References:
OPC UA ToolKit - http://sine.ni.com/nips/cds/view/p/lang/en/nid/215329
//YxH
Hi,
Has anyone ever tried to do color calibration across multiple cameras to a known value?
My situation is this:
1. Four identical setups with the same make and model 4k color line camera and LED lights
2. All four cameras have been white balanced according to the manufacturer's specs
3. On running the same product through all the different lines, we noticed slight variations in color from one system to the next
4. So we built a calibration fixture with known color targets (RGB values provided by the manufacturer of the color card). Colors are a known red, green, blue, yellow, black, and white
5. My next thought was that this should be easy: I have an image of a known color, so I draw an ROI inside the known color, extract the average value, and compare that to the known value to determine the ratio offsets for each color plane.
6. That logic seemed solid to me, but when I try it I end up with images that are less calibrated than before, because I start to "over-saturate" certain planes (i.e. push the value over 255) in certain colors, so I am not producing the results I really need.
Can someone explain to me where my logic is flawed? I have a decent background in machine vision, but color is not something I have done a whole lot with. Anything I have ever done with color was always "good enough" with the default white balance from the camera manufacturer.
Sharing code on this will be a little difficult but I can share snippets if needed. Mostly I am looking for help in the design framework though more than the actual "nuts and bolts" of the coding.
Please let me know if you have any questions and I will do my best to answer them. Thanks for your help in advance.
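To make the design-framework question concrete, here is the kind of alternative I am wondering about (a NumPy sketch with made-up numbers, not my production code): fit one linear correction matrix per camera from the measured patch averages to the card's reference values, instead of simple per-channel ratios, and clip only after applying it.

import numpy as np

# Hypothetical values -- replace with the card's reference RGB values and the
# ROI averages actually measured by one camera.
reference = np.array([[200,  30,  30],    # red patch
                      [ 30, 200,  30],    # green patch
                      [ 30,  30, 200],    # blue patch
                      [200, 200,  30],    # yellow patch
                      [ 20,  20,  20],    # black patch
                      [240, 240, 240]],   # white patch
                     dtype=float)
measured = reference * [0.92, 1.05, 0.88] + 6.0   # stand-in for one camera's drift

# Least-squares fit of a 3x3 matrix M so that measured @ M ~= reference.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

def correct(pixels):
    # Apply this camera's correction, then clip back into the 8-bit range.
    return np.clip(pixels.astype(float) @ M, 0, 255).astype(np.uint8)

print(correct(measured))   # should land close to the reference patches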
Hi everyone,
I am searching for a solution to store credentials (user ID/password) for my database in a file. I would like to implement something like the App.Config approach used in C#/.NET.
I already use Windows credentials, but I want to add support for specific users defined in the database. The database is mainly Access/SQL Server.
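Something along these lines is what I have in mind: a small config file next to the application, read with LabVIEW's Configuration File VIs (the file name, layout, and key names below are just a sketch):

; app.config.ini (hypothetical layout)
[database]
server   = MYSERVER\SQLEXPRESS
database = Production
user     = app_user
; store the password encrypted or at least obfuscated, not in plain text
password = <encrypted-value>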
Thanks!
Can anyone explain to me what this code means? How and where can I put the cRIO output in the code?
I have interfaced a SIM28ML GPS module with myRIO. I connected the GPS module using an RS-232 cable. I am getting the following error:
"The resource is valid, but VISA cannot currently access it"
I have attached the VI and the screenshots of the error. Please help me debug the error.
Hi to all.
I have an application in which I do golden template comparison.
The problem is that sometimes the object in the inspection image is a little smaller or bigger (because of how the photo was taken, not because of the object).
For that reason, before the golden template comparison I use geometric pattern matching and pass its output orientation and scale to the Alignment input of the Golden Template Comparison.
Now I would like to understand the Registration Method input of Golden Template Comparison. Would changing this input make the application work better in this case?
Thanks a lot
I have problems when setting up a shared variable process to log to an already existing database. This works fine on my development machine (LabVIEW and DSC 2015) but not on a run-time installation. There, the variables in the second process will not be logged until I validate the database setting (every time I start the program) and choose 'Use existing database for this project library'. I need to open the NI Distributed System Manager and select (right-click on the process) 'edit process' to manually resolve the conflict.
Note that all variables are set up in the VI, so I do not have any shared variable library in the LabVIEW project.
I want to do this validation, also described in the link below, directly from the VI:
http://zone.ni.com/reference/en-XX/help/371618H-01/lvdsc/dsc_valid_db_namepath/
I cannot find how to set these properties in any configuration VI, either for the DSC database or for the variable processes.
Any suggestions?
Regards,
Mattias
Hi!
While solving the problem of moving NXG from Beta to 2.0, I manually deleted many files, maybe too many.
At the moment NXG is working, but when I try to install the Web Module, Package Manager tells me:
Unable to locate package: ni-labview-nxg-2.0.0-web-module
It seems that a feed is missing.
I've checked the feeds using the gear icon at the top right of Package Manager, but there seems to be no feed for the Web Module.
Should there be a feed for the Web Module?
If you have it, could you give me the package name and location so I can add it manually?
Thank you!
Hi All,
I went through a number of white papers, but haven't really found what I need to deal with the problem.
To give you an idea of what we do:
1) We capture data from a detector: data(datapoints, pixels)
2) We sort this data into higher-dimensional arrays to be able to average and do computations on it. Such a data array can have dimensions (10, 3000, 128, 4). These arrays alone need big parts of the memory - hints on making that more efficient are appreciated!
This is done in a way that we first initialize the arrays once, then go into a measurement loop where we feed the arrays using shift registers from one iteration to the next. Inside the loop we do not use indicators or local variables, so as not to waste memory. To make the code easier to read, I created a typedef of the cluster containing all 8 arrays. This cluster is passed to the measurement loop and, inside it, from VI to VI. I have now found that passing such a cluster can be very inefficient in the case of large arrays - any recommendations on what to do, apart from passing every single array to every VI? We really struggle with memory load here, as starting this routine easily blocks 3+ GB of RAM.
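To put rough numbers on it (assuming DBL data, which is my assumption here):

# Back-of-the-envelope memory estimate for the cluster of 8 arrays (DBL assumed).
elements = 10 * 3000 * 128 * 4               # 15,360,000 elements per array
one_array_mb = elements * 8 / 2 ** 20        # ~117 MB per DBL array
print(one_array_mb)                          # 117.2
print(8 * one_array_mb / 1024)               # ~0.9 GB for the whole cluster
# Every extra copy LabVIEW makes of the cluster costs roughly another 0.9 GB,
# which is how 3+ GB of RAM gets used up quickly.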
best,
Julian
Hello,
I am very new to LabVIEW. I need to record a few temperature readings using K- and T-type thermocouples, and also the voltage of a battery.
When I select the CJC source as Built-In, the following error shows up:
Error -200077 occurred at DAQ Assistant
Possible Reason(s):
Requested value is not a supported value for this property. The property value may be invalid because it conflicts with another property.
Property: AI.Thrmcpl.CJCSrc
Requested Value: Built-In
Possible Values: Constant Value, Channel
Channel Name: Temperature
I selected Constant Value and set the reference temperature to 25 °C; it did not work. After that I set the reference temperature to 0 °C; it still did not work.
I have also used Channel as the CJC source: I set one thermocouple to Constant Value, and for the other thermocouple I chose Channel and referred to the first one as the CJC channel.
Unfortunately nothing worked.
Could you please suggest how I can sort out this problem using an NI USB-6212, or what would be the best way to measure temperature with NI DAQ?
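For what it's worth, this is the configuration I am trying to describe, expressed as a nidaqmx-python sketch (device and channel names are made up; the intent is a constant 25 °C cold-junction value, since the error above says Built-In is not a supported CJC source on this device):

import nidaqmx
from nidaqmx.constants import CJCSource, TemperatureUnits, ThermocoupleType

with nidaqmx.Task() as task:
    # "Dev1/ai0" is a placeholder -- use the terminal the thermocouple is wired to.
    task.ai_channels.add_ai_thrmcpl_chan(
        "Dev1/ai0",
        thermocouple_type=ThermocoupleType.K,
        units=TemperatureUnits.DEG_C,
        cjc_source=CJCSource.CONSTANT_USER_VALUE,
        cjc_val=25.0)
    print(task.read())   # one temperature sample in deg C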
Thanks,
Masuma
I am looking for a working example of the Linear Algebra Matrix Multiply function.
The goal is to multiply a 3x4 matrix by a 4x1 vector, giving a 3x1 vector.
Attached is what I have so far, but it is not working correctly, because I am obviously not feeding the right matrix row into the Multiply block.
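For reference, this is the numeric behavior I am after (a quick NumPy sketch with made-up numbers); the whole 3x4 matrix and the whole 4x1 vector go in, not a single row:

import numpy as np

A = np.arange(1.0, 13.0).reshape(3, 4)        # 3x4 matrix
v = np.array([[1.0], [0.0], [2.0], [1.0]])    # 4x1 column vector

print(A @ v)   # 3x1 result: [[11.], [27.], [43.]]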
Any help would be very appreciated...
Best regards,
Michael
Hi Guys,
Quick rookie question.
I'm transmitting data over RS232 and I've formatted it like S: (decimal), MA: (decimal), P: (decimal),
(no space between colon and the bracket, added to avoid emojis)
The transmission script in my controller is coded like this (\n being a new line):
print ("S:", Speed," ,\n","MA:", MotorAmps," ,\n","P:", Power," ,\n\r")
This is a cyclic telemetry i.e. it gets repeated every 50ms.
I receive this data using NI VISA, all seems good when I probe on the Read VI in the same loop. Refresh rate is the same too.
This is the probe data that I get:
+
+
S:0 ,
MA:2 ,
P:8 ,
S:0 ,
MA:4 ,
P:6 ,
VI (Attached)
I've tried one single string and other ways but can't get around it.
This one, however, works better, but the string output takes at least 3 times longer to refresh, as if the scanning of the string isn't happening as fast. Sometimes there are misses and mixing of values too.
Any useful advice would be appreciated.
The goal is to get individual values of S, MA and P and later use them.
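For what it's worth, this is the kind of parsing I am trying to reproduce in the VI, written out in Python as a sanity check: accumulate the received text and take the values from the most recent complete S/MA/P group.

import re

buffer = "S:0 ,\nMA:2 ,\nP:8 ,\nS:0 ,\nMA:4 ,\nP:6 ,\n"   # example of the received text

frames = re.findall(r"S:\s*(-?\d+)\s*,\s*MA:\s*(-?\d+)\s*,\s*P:\s*(-?\d+)", buffer)
if frames:
    speed, motor_amps, power = map(int, frames[-1])   # newest complete frame
    print(speed, motor_amps, power)                   # 0 4 6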
I am trying to read out the physical name of a virtual channel I created in MAX. But I get this error when I try to read the channel:
Possible reason(s):
Task contains physical channels on one or more devices that require you to specify the Sample Clock rate. Use the Sample Clock Timing function/VI to specify a Sample Clock rate.
You cannot specify a Sample Clock rate if Mode is set to On Demand.
Device: cDAQ9184-1B1F0FE
Task Name: Loadcell AE 01
Here is all of the info I have.
Windows 7 SP1
LabVIEW 2017
cDAQ 9184 (I do not believe that it matters here, because I also tried it on a 9188XT and got the same result)
NI-9237
When I simply go into MAX and create a new channel (the name doesn't matter), I use the full-bridge configuration and do not have to add a scale to get this issue. Then I go to my block diagram, drop a DAQmx Channel property node, and try to read PhysicalChanName; I get the error posted above. I get the channel names for other channels I have created, but I cannot seem to get them for the channels on the 9237 module. Am I missing something here? I am not sure why you would have to set up a sample clock just to read a channel name. I am trying to get the physical channel each global channel is associated with, to allow the operator to select valid channels and to make sure they can see where they should be connected on the panel.
If I add these channels to a task and set the active channel, I can then read out the PhysicalChanName, but not for the channel individually.
Any thoughts?
I'm looking to generate a custom control that displays the radial position of a gear. I have attached a VI for a gauge that rolls over every 10 counts to simulate one rotation. I've also attached a .png of the gear that I would like to display.
The problem I encounter is that when I try to replace the needle in the control customization window and test it, the gear does not rotate on its center axis. Instead, it just repositions itself so it looks like it's attached to the end of the needle. An example of this is attached as well. Does anyone have any advice on how to get the gear image to rotate about its central axis?
Thanks!
The VI has three buttons for the user to check for the Coldest, Hottest and Rainy Day of the week.
o For simplicity, output the following using an event structure:
o Monday for the Coldest Day
o Tuesday for the Hottest Day
o Wednesday for the Rainy Day
o The VI While Loop counter will only increment when one of the three buttons is pressed.
o If no button is pressed for 10 seconds, the output should default to the Hottest Day of the week.
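A text-mode sketch of the required behavior (a Python stand-in for the LabVIEW event structure, just to spell out the logic; the names are made up):

import queue

events = queue.Queue()        # a real UI would push "coldest"/"hottest"/"rainy" here
DAYS = {"coldest": "Monday", "hottest": "Tuesday", "rainy": "Wednesday"}
presses = 0

for _ in range(5):            # a few loop iterations, just for the sketch
    try:
        button = events.get(timeout=10.0)   # like the event structure's Timeout terminal
        print(DAYS[button])
        presses += 1                        # the loop counter only moves on a real press
    except queue.Empty:
        print(DAYS["hottest"])              # no press for 10 s: default to the Hottest Day

print("button presses:", presses)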