Media Server :: General Hardware Guidelines For Customers To Hear Permission-based Audio Stream
Oct 19, 2009
We're running a permission-based audio stream to be amplified through in-store sound systems, but we're unsure about the basic hardware requirements. What features must the hardware have to log into our server with a username/password and receive the audio signal?
I've had FMS running on my local machine for a while and have had a little experience writing FMS apps, but I've just tried recording audio for the first time using the standard vod application and I keep getting a "Write access denied for stream" error. My AS3 code is copied and pasted from various examples, and I'm confident that it works.
I'm running Windows XP Service Pack 3 and FMIS 3.5.
I've had a look at the vod/media directory, and under Windows -> Properties the read-only attribute is ticked. Every time I un-tick it, it reverts to being ticked. I've googled this, and Microsoft says that most programs ignore the read-only attribute and that it only really applies to files. I've also tried the Microsoft fix for setting the read-only attribute via cmd and still no joy (it doesn't clear the read-only attribute, and FMS still won't record the audio after setting it via cmd).
I've also tried our dev server install of FMS (running under Linux) and I'm getting the same results.
Here's my AS3 code...
private function initApp(event:Event):void {
    removeEventListener(Event.ADDED_TO_STAGE, initApp);
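For reference, here is a minimal recording sketch. Note that the stock vod application is set up for playback, which by itself can produce a write-access error when you try to record to it; the application name "audiorecorder" and stream name "audiotest" below are hypothetical, and the app would need to allow recording:

// Minimal audio-recording sketch. This is not the poster's elided code,
// just an illustration with assumed names.
package {
    import flash.display.Sprite;
    import flash.events.NetStatusEvent;
    import flash.media.Microphone;
    import flash.net.NetConnection;
    import flash.net.NetStream;

    public class RecordAudio extends Sprite {
        private var nc:NetConnection;
        private var ns:NetStream;

        public function RecordAudio() {
            nc = new NetConnection();
            nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
            nc.connect("rtmp://localhost/audiorecorder");  // hypothetical app name
        }

        private function onStatus(e:NetStatusEvent):void {
            if (e.info.code == "NetConnection.Connect.Success") {
                ns = new NetStream(nc);
                ns.attachAudio(Microphone.getMicrophone());
                ns.publish("audiotest", "record");  // server writes audiotest.flv
            }
        }
    }
}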
I'm trying to set up an audio-only, on-demand HLS stream in FMS 3.5. I have no problems streaming the sample f4v files via HLS, nor do I have any issues streaming the mp3 files via RTMP to a Flash client. However, when I try to stream a sample mp3 via HLS (the mp3 file is located in the same directory as the sample f4v files), I get a 404 error. I can't find anything in the documentation about streaming audio via HLS on demand.
"To serve streams over a cellular network, one of the streams must be audio-only. For more information, see HTTP Live Streaming Overview.To publish an audio-only stream, enter the following in the Flash Media Encoder Stream field:livestream%i?adbe-live-event=liveevent&adbe-audio-stream-name=livestream1_audio_only&adbe-audio-stream-src=livestream1If the encoder specifies individual query strings for each stream, use individual stream names instead of the variable %i:livestream1?adbe-live-event=liveevent&adbe-audio-stream-name=livestream1_audio_onlylivestream2?adbe-live-event=liveevent&adbe-audio-stream-name=livestream2_audio_onlyTo generate a set-level variant playlist when using an audio-only stream, specify the audio codec of the audio-nly stream. Specify the audio and the video codec of the streams that contain audio and video.For more information about using the Set-level F4M/M3U8 File Generator, see Publish and play live multi-bitrate streams over HTTP.
I'm trying to build some kind of online radio, so I'll be streaming audio (mp3) in real time. I have a playlist on my server, and the idea is for this playlist to play all day long; on the client side the user can only play and stop the music.
The problem is that when one track stops and the next one starts, I need the server to send not only the new audio but also all the info about the file: duration, author, and track name. That way my player can show play/stop controls and all the info about the track currently playing. Until now I've only come across video live streaming, and very little information on how to do a live transmission of audio.
PS: the only info on streaming audio that I've found so far is about audio on demand, but the way I want things to work, the client can't choose the music.
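A rough server-side sketch of the pattern (main.asc, with a hypothetical playlist and a made-up "onTrackInfo" callback name): Stream.send pushes the metadata to every subscriber, and the client picks it up on its NetStream client object.

// main.asc -- sketch, assuming the mp3s live in the application's streams folder.
application.onAppStart = function() {
    this.radio = Stream.get("radio");   // server-side live stream all clients play
    this.playlist = [
        {file: "mp3:track1", duration: 215, author: "Artist A", title: "Song A"},
        {file: "mp3:track2", duration: 184, author: "Artist B", title: "Song B"}
    ];
    this.index = 0;
    this.playNext();
};

application.playNext = function() {
    if (this.timer != null) clearInterval(this.timer);
    var track = this.playlist[this.index];
    this.radio.play(track.file, 0, track.duration, true);  // play this mp3 on the stream
    this.radio.send("onTrackInfo", track);                 // push the track metadata
    this.index = (this.index + 1) % this.playlist.length;
    var app = this;
    this.timer = setInterval(function() { app.playNext(); }, track.duration * 1000);
};

On the client, ns.client = { onTrackInfo: function(info:Object):void { /* update the UI */ } }; receives the object when each track starts.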
I need to improve the uptime of a 24/7 live audio stream. Periodically, the connection between FMLE and FMS (3.0.1) is broken and FMLE cannot reconnect; FMLE shows a NetStream.Publish.BadName error. How do I prevent this error and get FMLE to automatically reconnect every time? What other measures can be taken to improve the uptime of a live audio stream publishing from FMLE 3.1 (Windows) to FMS 3.0.1? Also, are there any improvements in FMS 3.5 that will help with uptime?
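One common cause is a ghost session: after an unclean disconnect the server can still consider the old session the publisher of that name until it times out, so the reconnecting FMLE gets BadName. A server-side sketch of the usual workaround, assuming an FMS build whose server-side API exposes Application.onPublish (check the SSAS reference for your version):

// main.asc -- kick the stale session still holding a stream name.
application.onAppStart = function() {
    this.publishers = {};
};

application.onPublish = function(client, stream) {
    var old = this.publishers[stream.name];
    if (old != null && old != client) {
        this.disconnect(old);              // drop the ghost so the republish succeeds
    }
    this.publishers[stream.name] = client;
};

application.onUnpublish = function(client, stream) {
    if (this.publishers[stream.name] == client) {
        delete this.publishers[stream.name];
    }
};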
I am using FMLE and FMS 4 Enterprise. When I use FMLE to broadcast the webcam without audio, it's really fast. I'm on a 1 Gbps connection with nothing else hitting the server at the moment, and I've tested download speeds to and from the server at around 700 Mbps. I have tried all the different audio sample and bit rate settings, all with the same result: once audio is turned on, there is a 15-second delay. What am I doing wrong?
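If the delay turns out to be player-side buffering rather than the encoder, a small client-side tweak is worth ruling out first. A sketch, with an assumed stream name and an already-connected NetConnection nc:

var ns:NetStream = new NetStream(nc);
ns.bufferTime = 0.5;     // keep the live buffer short; a large buffer adds latency
ns.play("livestream");   // hypothetical stream name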
I have read that you can stream protected audio/video to iOS-based devices with the new FMS 4.5. Is it also possible to stream protected A/V to an HTML5/JS-based player?
Hi, I installed Flash Media Server 3.0.4 on a Windows 2003 server. I am using the default live application and broadcasting to it using FME version 3.0. Inside FME I choose MP3 as the audio codec, stereo, and 128 kbps. When I watch the stream, everything works fine, but after a minute (sometimes two) the sound disappears and I can only see the video run. If I refresh the page that contains the SWF that plays the stream, the sound works again, but then after another minute the sound stops and only the video continues to play.

It is important to say that this only happens when I play a live stream from a server located outside my local LAN. If I play a stream from my development FMS server in my local LAN, this problem doesn't happen, using the same SWF and the same settings in FME. Moreover, if I use the Nellymoser audio codec, everything works fine and this problem doesn't happen either. The thing is that I want to broadcast a live event, and it is very important that the audio sounds good (stereo at 192 kbps); with the Nellymoser codec the results aren't that good.
I'd like to capture the audio data from an RTMFP stream the client is subscribed to (so I get a ByteArray of audio samples). The presence of the audioSampleAccess property on the NetStream class certainly makes that sound possible: "For RTMFP connections, specifies whether peer-to-peer subscribers on this NetStream are allowed to capture the audio stream. When FALSE, subscriber attempts to capture the audio stream show permission errors." But in the case of audio, I don't know how to address the audio data to get it into a ByteArray. My instinct said this wasn't possible, but the presence of the audioSampleAccess property makes me think it might be.
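As far as I can tell, the intended pairing is that the publisher grants access and the subscriber then snapshots the playing audio through SoundMixer.computeSpectrum; this is an assumption about how the property is meant to be used, not something confirmed by the docs quoted above:

// Publisher side: allow peers to capture audio on the publishing NetStream.
publishStream.audioSampleAccess = true;

// Subscriber side: copy the currently playing waveform into a ByteArray.
import flash.media.SoundMixer;
import flash.utils.ByteArray;

var samples:ByteArray = new ByteArray();
SoundMixer.computeSpectrum(samples, false);  // 512 raw samples per channel, left then right

Without the permission, the computeSpectrum call throws a SecurityError.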
I have read in the FMIS 4 "new features" that "absolute timecode" allows switching audio tracks while playing a video (managing sync). Is there any example showing how to use this functionality (server config, Flash Player ActionScript example)?
I did a live stream last week using 282, 482, 832, and 1500 Kbps streams. What would cause the audio to get out of sync with the live video stream? I'm trying to determine whether it was bandwidth related, a CPU/memory issue on the FMIS 4.5 server, or an issue with the encoding PC exceeding its limits.
I currently use FMS 4.5 to stream live technical videos.
Hooked into Flash Media Live Encoder is a Roland VR-5, which lets me switch between multiple cameras.
I also want to broadcast a few live audio-only streams. I could easily use the VR-5, which in turn has a Behringer audio mixer attached to allow up to 16 audio devices to be mixed in, and just turn off the video option in Flash Media Live Encoder, but:
I don't want to have to put a video window on the webpage. I just want an audio controller.
So how can I use Flash Media Server to just stream a live audio feed?
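Playing a live NetStream without attaching it to a Video object gives exactly that: the audio plays through the global mixer, and the page only needs your own controls. A sketch, with hypothetical server/app/stream names:

import flash.events.NetStatusEvent;
import flash.net.NetConnection;
import flash.net.NetStream;

var nc:NetConnection = new NetConnection();
nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
nc.connect("rtmp://yourserver/live");

function onStatus(e:NetStatusEvent):void {
    if (e.info.code == "NetConnection.Connect.Success") {
        var ns:NetStream = new NetStream(nc);
        ns.play("radio");   // no Video attached, so only the audio is rendered
        // wire play/stop buttons to ns.play()/ns.close(), volume to a SoundTransform
    }
}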
I need to merge multiple live audio streams into a single stream so that I can pass this stream as input to VoIP through a softphone. For this I tried the following approach: created a new stream (str1) on FMS in onAppStart and recorded the live streams (sent through the microphone) into that new stream.
Below is the code:

application.onAppStart = function() {
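Completing that snippet as a sketch of the approach described (stream names hypothetical): note that server-side Stream.play switches the stream's source rather than mixing, so this records one live source at a time; truly mixing several microphones into one stream needs an external mixer.

// main.asc -- create a server-side stream, record it, pull a live source into it.
application.onAppStart = function() {
    this.str1 = Stream.get("str1");            // create/fetch the server-side stream
    if (this.str1) {
        this.str1.record();                    // start writing str1.flv
        this.str1.play("micStreamA", -1, -1);  // pull a live microphone stream into it
    }
};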
I currently have two connections with two separate streams; they both hit the same FMS 3.5 server. One connection transfers live audio and video, and the other is used for remote objects. Sometimes, when viewing the audio and video stream on a slower internet connection, the shared object connection disconnects. I think it is a bandwidth issue. Is there any way to set the priority of the streams? That would let me give the shared object connection a higher priority so it doesn't disconnect.
When I publish the streams on a Linux server, the .flv files have permissions like this: -rw-rw----. And I don't have permission to access the files from a client browser. Currently I can only change the permissions manually using chmod.
I am developing a video chat application. I am able to play the published video stream, but I am unable to hear the sound that is published along with the video. The video is visible but the sound is not coming through. What is going wrong?
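A frequent cause is publishing without attaching the microphone, so the stream goes out video-only. A publisher-side sketch (ns is the publishing NetStream; the stream name is assumed):

import flash.media.Camera;
import flash.media.Microphone;

var mic:Microphone = Microphone.getMicrophone();
if (mic != null) {
    mic.setUseEchoSuppression(true);
    ns.attachAudio(mic);        // without this, subscribers get no sound
}
ns.attachCamera(Camera.getCamera());
ns.publish("chatstream");       // hypothetical stream name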
The problem I have is that on Linux, the .flv files generated by FMS have permission 660. Due to this they become kind of useless, as I can't do much with them. Is there a way to make FMS generate streams with a better default permission?
I have gotten the vhost to work with the owner of the applications folder set to root:root. I need to allow users access to their files through cPanel, and for that to work their files have to be owned by their user name (user:user), but the vhosts can't be accessed unless it's root:root.
I have a broadcasting project where the host of the show can have multiple sound effects to use on their show. I'm wondering whether the only way to go about this is for all the clients listening to the show to download the mp3 of that sound effect, or whether the sound effect could come from the host, like combining his mic (for his voice) with straight audio (the sound effect) in the stream.
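One way to sidestep mixing at the source is to ship the trigger instead of the audio: the host sends a cue over the published stream, and every listener plays a local copy of the effect. A sketch, where the handler name "playSfx" and the file layout are made up:

// Host side: broadcast a cue to everyone subscribed to the stream.
hostStream.send("playSfx", "applause");

// Listener side: handle the cue on the subscribing NetStream.
import flash.media.Sound;
import flash.net.URLRequest;

subscribeStream.client = {
    playSfx: function(id:String):void {
        var sfx:Sound = new Sound(new URLRequest("sfx/" + id + ".mp3"));
        sfx.play();    // starts as soon as enough has buffered
    }
};

The trade-off is that each client does fetch the mp3 once, but the host's mic stream stays untouched.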
I was wondering if anyone could help me get some information on audio limitations in Flash. Is it possible to have audio "scrubbing" that lets you hear the audio as you scrub? Currently the only way I know to do this is by having the audio on the timeline, set to stream, and that method seems to be causing more issues than it solves. Preferably I would just like to load the audio in dynamically and then scrub through it. Is it possible to use that method and still retain the ability to hear it as you scrub?

The main issues I am having with audio on the timeline are that I have to manually add frames to the timeline, and on top of that I can't dynamically load the audio through XML, at least not that I know of. I think the best route is probably loading the audio dynamically.
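Dynamically loaded audio can be scrubbed coarsely by restarting playback at the dragged position. A sketch, assuming a file "song.mp3" and a scrub control that reports a 0..1 fraction:

import flash.media.Sound;
import flash.media.SoundChannel;
import flash.net.URLRequest;

var song:Sound = new Sound(new URLRequest("song.mp3"));
var channel:SoundChannel;

function onScrub(fraction:Number):void {           // call this as the thumb moves
    if (channel != null) channel.stop();
    channel = song.play(fraction * song.length);   // song.length is in milliseconds
}

Restarting on every drag tick gives a stuttery but audible scrub; for smoother results you would buffer samples via Sound.extract and drive playback through a SampleDataEvent handler.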
I have Flash Media Streaming Server 3.5 (not Interactive) running on RHEL 5.5 x86_64 Linux. All is working well, but how do I prevent unauthorized users from connecting to the server and streaming content to the live stream? How can I set up the server to require a username and password to stream live media to the server? I am new to this product; I have been reading some documentation but have not found a clear-cut answer on how to force a username and password for connections that publish live content. I am using the Adobe FMS Apache install; what files need changing? I want to stop a random person on the public internet from connecting to the server and starting a live stream. Can this be done with a username and password?
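For reference, credential checks on connect are normally done in a server-side script, as in the sketch below. Note the caveat: this requires an edition that runs server-side ActionScript (the Interactive/full server); my understanding is that the Streaming edition doesn't execute custom server scripts, so there you'd be looking at network-level controls instead. The credentials here are placeholders:

// main.asc -- reject connections that don't present the expected credentials.
application.onConnect = function(client, user, pass) {
    if (user == "publisher" && pass == "secret") {   // placeholder check only
        this.acceptConnection(client);
    } else {
        this.rejectConnection(client);
    }
};

A client built with NetConnection.connect("rtmp://yourserver/live", "publisher", "secret") supplies the extra onConnect arguments; how an encoder passes them depends on the encoder.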
I'm trying to write software that sends video and audio data to a Flash Media Server using the RTMP protocol. Currently my program can communicate with the server correctly. The RTMP specification does not describe the raw data in video/audio messages, so I muxed raw H.264 and AAC data into video/audio messages and sent them to the server. The server seems to accept them, but a video player cannot play back the stream coming from the server; the player just says "Loading..." As a test, I sniffed the network packets between Wirecast and the Flash Media Server and ripped out only the video and audio data. Then I muxed that data into video/audio messages and sent it to the server. In that case, the video player connected to the server plays the stream correctly.
I checked the stream sent from Wirecast, and it doesn't appear to be raw H.264: the data starts with 0x17 rather than an H.264 start code. Given this, I am wondering what kind of container format I should use for the H.264/AAC data I send to the Flash Media Server.
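That matches the FLV-style tag payloads RTMP carries: AVC video bodies start with a frame-type/codec byte (0x17 keyframe, 0x27 inter frame), then an AVCPacketType and a 24-bit composition-time offset, and the H.264 goes in as an AVCDecoderConfigurationRecord (packet type 0, sent first) followed by length-prefixed NALUs (packet type 1), not start codes. AAC bodies start with 0xAF plus an AACPacketType. A ByteArray sketch of the framing as given in the FLV specification:

import flash.utils.ByteArray;

// Video message body: [frame/codec][AVCPacketType][composition time][data]
function avcPayload(keyframe:Boolean, packetType:int, cts:int, data:ByteArray):ByteArray {
    var out:ByteArray = new ByteArray();
    out.writeByte(keyframe ? 0x17 : 0x27);  // frame type 1/2, codec id 7 (AVC)
    out.writeByte(packetType);              // 0 = decoder config, 1 = NALUs
    out.writeByte((cts >> 16) & 0xFF);      // signed 24-bit composition offset
    out.writeByte((cts >> 8) & 0xFF);
    out.writeByte(cts & 0xFF);
    out.writeBytes(data);                   // config record, or length-prefixed NALUs
    return out;
}

// Audio message body: [0xAF][AACPacketType][data]
function aacPayload(packetType:int, data:ByteArray):ByteArray {
    var out:ByteArray = new ByteArray();
    out.writeByte(0xAF);                    // sound format 10 = AAC (rate/size/channel bits fixed)
    out.writeByte(packetType);              // 0 = AudioSpecificConfig, 1 = raw AAC frame
    out.writeBytes(data);
    return out;
}

The decoder configuration payloads (packet type 0 for both codecs) have to be sent before any actual frames, or players will sit on "Loading...".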
When I try to live stream with FMS, I can stream video with Flash Media Live Encoder to the server, but when I create the player to receive the live stream from the server, I can't receive it. Can anyone give me a step-by-step tutorial on how to do it?
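A minimal subscriber sketch (the application "live" and stream name "mystream" are assumptions; the stream name must match what FMLE publishes):

import flash.events.NetStatusEvent;
import flash.media.Video;
import flash.net.NetConnection;
import flash.net.NetStream;

var video:Video = new Video(640, 480);
addChild(video);

var nc:NetConnection = new NetConnection();
nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
nc.connect("rtmp://yourserver/live");

function onStatus(e:NetStatusEvent):void {
    if (e.info.code == "NetConnection.Connect.Success") {
        var ns:NetStream = new NetStream(nc);
        video.attachNetStream(ns);
        ns.play("mystream");   // the stream name entered in FMLE
    }
}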
I'm only just getting started on this whole domain of learning, so go easy! If I set up a P2P video/audio chat (similar to the sample VideoPhone on the Cirrus site), can I get the stream from both parties to send to a server at the same time so that I can record it? If so, would I have to use an FMS to stream it to and perform the recording (and if so, which version could I get away with)? Are there any (preferably free, or at least well-tutorialised) solutions for the recording side of things?
Currently it seems like the only option for doing the P2P part is to use Stratus/Cirrus, unless I use FMS 4 Enterprise.
And how effective can this kind of setup be, in terms of the quality of the stream and the recording? Does any of this make sense?
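One approach worth sketching: alongside the RTMFP peer stream, each party opens a plain RTMP connection to an FMS application and publishes the same camera/mic with the record flag, so the server writes an FLV per participant. Server, app, and stream names below are hypothetical, and recording needs an edition/license that permits it (the free developer server does for testing):

import flash.events.NetStatusEvent;
import flash.media.Camera;
import flash.media.Microphone;
import flash.net.NetConnection;
import flash.net.NetStream;

var recordNC:NetConnection = new NetConnection();
recordNC.addEventListener(NetStatusEvent.NET_STATUS, onRecordStatus);
recordNC.connect("rtmp://yourserver/chatrecorder");   // hypothetical app

function onRecordStatus(e:NetStatusEvent):void {
    if (e.info.code == "NetConnection.Connect.Success") {
        var rec:NetStream = new NetStream(recordNC);
        rec.attachAudio(Microphone.getMicrophone());
        rec.attachCamera(Camera.getCamera());
        rec.publish("partyA", "record");              // server writes partyA.flv
    }
}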
I'm testing RTMFP stream multicast on FMS 4 Update 1. After 10 minutes I get this message: "RTMFP Multicast stream has exceeded max duration allowed; closing stream." But I am not using IP multicast.