[src/docs] Add xml documentation for types. #20672

Merged · 6 commits · Jun 6, 2024
16 changes: 16 additions & 0 deletions docs/api/ARKit/ARSession.xml
@@ -0,0 +1,16 @@
<Documentation>
<Docs DocId="T:ARKit.ARSession">
<summary>Manages the camera capture, motion processing, and image analysis necessary to create a mixed-reality experience.</summary>
<remarks>
<para>An <see cref="T:ARKit.ARSession" /> object represents the system resources required for a mixed-reality experience. The <see cref="M:ARKit.ARSession.Run(ARKit.ARConfiguration,ARKit.ARSessionRunOptions)" /> method must be passed an <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=ARKit%20ARSession%20Configuration&amp;scope=Xamarin" title="T:ARKit.ARSessionConfiguration">T:ARKit.ARSessionConfiguration</a></format> object that controls specific behaviors. </para>
<para>Developers who use the <see cref="T:ARKit.ARSCNView" /> to present their AR imagery do not need to instantiate their own <see cref="T:ARKit.ARSession" /> object but instead should call <see cref="M:ARKit.ARSession.Run(ARKit.ARConfiguration,ARKit.ARSessionRunOptions)" /> on the <see cref="P:ARKit.ARSCNView.Session" /> property. For example:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var arView = new ARSCNView();
var arConfig = new ARWorldTrackingSessionConfiguration { PlaneDetection = ARPlaneDetection.Horizontal };
arView.Session.Run (arConfig);
]]></code>
</example>
</remarks>
</Docs>
</Documentation>
14 changes: 14 additions & 0 deletions docs/api/AVFoundation/AVAsset.xml
@@ -0,0 +1,14 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVAsset">
<summary>Base class for timed video and audio.</summary>
<remarks>
<para>An <see cref="T:AVFoundation.AVAsset" /> represents one or more media assets. These are held in its <see cref="P:AVFoundation.AVAsset.Tracks" /> property. Additionally, <see cref="T:AVFoundation.AVAsset" />s include metadata, track grouping, and preferences about the media.</para>
<para>Because media assets such as movies are large, instantiating an <see cref="T:AVFoundation.AVAsset" /> will not automatically load the file. Properties are loaded when they are queried or via explicit calls to <see cref="M:AVFoundation.AVAsset.LoadValuesTaskAsync(System.String[])" /> or <see cref="M:AVFoundation.AVAsset.LoadValuesAsynchronously(System.String[],System.Action)" />.</para>
<para>During playback, the current presentation state of an <see cref="T:AVFoundation.AVAsset" /> is represented by an <see cref="T:AVFoundation.AVPlayerItem" /> object, and the playback is controlled by a <see cref="T:AVFoundation.AVPlayer" />:</para>
<para>
<img href="~/AVFoundation/_images/AVFoundation.AssetPlayerItemPlayer.png" alt="UML Class Diagram illustrating classes relating to AVAsset" />
</para>
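<para>The following is a minimal sketch of that relationship (the file name is illustrative and the code assumes an async context):</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Minimal sketch: load an asset's properties, then hand it to a player.
var asset = AVAsset.FromUrl (NSUrl.FromFilename ("movie.mp4"));

// Property values are not available until they have been loaded.
await asset.LoadValuesTaskAsync (new [] { "duration", "tracks" });

// Playback state is represented by an AVPlayerItem and driven by an AVPlayer.
var playerItem = new AVPlayerItem (asset);
var player = new AVPlayer (playerItem);
player.Play ();
]]></code>
</example>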
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAsset_Class/index.html">Apple documentation for <c>AVAsset</c></related>
</Docs>
</Documentation>
29 changes: 29 additions & 0 deletions docs/api/AVFoundation/AVAudioEnvironmentDistanceAttenuationModel.xml
@@ -0,0 +1,29 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVAudioEnvironmentDistanceAttenuationModel">
<summary>Enumerates attenuation models used by <see cref="T:AVFoundation.AVAudioEnvironmentDistanceAttenuationParameters" />.</summary>
<remarks>
<para>Graph of <c>Gain</c> as Distance ranges from 0 to 10 with: <c>ReferenceDistance = 5</c>, <c>RolloffFactor = 0.5</c>, and <c>MaximumDistance = 20</c></para>
<para>
<see cref="F:AVFoundation.AVAudioEnvironmentDistanceAttenuationModel.Exponential" />
</para>
<para>
<img href="~/AVFoundation/_images/AVFoundation.AVAudioEnvironmentDistanceAttenuationModel.Exponential.png" alt="Graph of exponential attenuation">
</img>
</para>
<para>
<see cref="F:AVFoundation.AVAudioEnvironmentDistanceAttenuationModel.Inverse" />
</para>
<para>
<img href="~/AVFoundation/_images/AVFoundation.AVAudioEnvironmentDistanceAttenuationModel.Inverse.png" alt="Graph of inverse attenuation">
</img>
</para>
<para>
<see cref="F:AVFoundation.AVAudioEnvironmentDistanceAttenuationModel.Linear" />
</para>
<para>
<img href="~/AVFoundation/_images/AVFoundation.AVAudioEnvironmentDistanceAttenuationModel.Linear.png" alt="Graph of linear attenuation">
</img>
</para>
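<para>The parameter values used in the graphs above can be applied to the attenuation parameters of an <see cref="T:AVFoundation.AVAudioEnvironmentNode" />. The following is a minimal sketch; it assumes <c>environmentNode</c> is an environment node already attached to an audio engine graph:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Minimal sketch: select an attenuation model and the parameters used in the graphs above.
// Assumes `environmentNode` is an AVAudioEnvironmentNode attached to an AVAudioEngine.
var parameters = environmentNode.DistanceAttenuationParameters;
parameters.DistanceAttenuationModel = AVAudioEnvironmentDistanceAttenuationModel.Exponential;
parameters.ReferenceDistance = 5f;
parameters.RolloffFactor = 0.5f;
parameters.MaximumDistance = 20f;
]]></code>
</example>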
</remarks>
</Docs>
</Documentation>
29 changes: 29 additions & 0 deletions docs/api/AVFoundation/AVAudioRecorder.xml
@@ -0,0 +1,29 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVAudioRecorder">
<summary>Audio recording class.</summary>
<remarks>
<para>
To create instances of this class, use the factory method <format type="text/html"><a href="https://docs.microsoft.com/en-us/search/index?search=AVFoundation%20AVRecorder%20To%20Url(%20Foundation%20NSUrl%20, %20AVFoundation%20AVAudio%20Recorder%20Settings%20,Foundation%20NSError%20)&amp;scope=Xamarin" title="M:AVFoundation.AVRecorder.ToUrl(Foundation.NSUrl, AVFoundation.AVAudioRecorderSettings,Foundation.NSError)">M:AVFoundation.AVRecorder.ToUrl(Foundation.NSUrl, AVFoundation.AVAudioRecorderSettings,Foundation.NSError)</a></format>.</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var settings = new AVAudioRecorderSettings () {
AudioFormat = AudioFormatType.LinearPCM,
AudioQuality = AVAudioQuality.High,
SampleRate = 44100f,
NumberChannels = 1
};
// assumes `url` points at a writable destination, for example:
var url = NSUrl.FromFilename (Path.Combine (Path.GetTempPath (), "recording.wav"));
NSError error;
var recorder = AVAudioRecorder.ToUrl (url, settings, out error);
if (recorder == null){
Console.WriteLine (error);
return;
}
recorder.PrepareToRecord ();
recorder.Record ();
]]></code>
</example>
</remarks>
<related type="recipe" href="https://developer.xamarin.com/ios/Recipes/Media/Sound/Play_Sound">Play Sound</related>
<related type="recipe" href="https://developer.xamarin.com/ios/Recipes/Media/Sound/Record_Sound">Record Sound</related>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioRecorder_ClassReference/index.html">Apple documentation for <c>AVAudioRecorder</c></related>
</Docs>
</Documentation>
67 changes: 67 additions & 0 deletions docs/api/AVFoundation/AVAudioSession.xml
@@ -0,0 +1,67 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVAudioSession">
<summary>Coordinates an audio playback or capture session.</summary>
<remarks>
<para> Application developers should use the singleton object
retrieved by <see cref="M:AVFoundation.AVAudioSession.SharedInstance" />.
</para>
<para>
Because the audio hardware of an iOS device is shared
between all apps, audio settings can only be "preferred" (see
<c>SetPreferred*</c> methods) and the application developer
must account for use-cases where these preferences are
overridden.
</para>
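<para>For example, a preferred sample rate can be requested and the rate actually granted read back afterwards (a minimal sketch with error handling reduced to a console message):</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Minimal sketch: preferences are requests, not guarantees.
var session = AVAudioSession.SharedInstance ();
NSError error;
if (!session.SetPreferredSampleRate (48000, out error))
    Console.WriteLine (error);
// Read back the rate the hardware actually granted.
Console.WriteLine ("Actual sample rate: " + session.SampleRate);
]]></code>
</example>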
<para>
The interaction of an app with other apps and system
      services is determined by its audio category. You can use the <see cref="M:AVFoundation.AVAudioSession.SetCategory(System.String,System.String,AVFoundation.AVAudioSessionRouteSharingPolicy,AVFoundation.AVAudioSessionCategoryOptions,Foundation.NSError@)" /> method to set this category.
          </para>
<para>
      You should also set the Mode (using <see cref="M:AVFoundation.AVAudioSession.SetMode(Foundation.NSString,Foundation.NSError@)" />) to
      describe how your application will use audio.

</para>
<para>
As is common in AV Foundation, many methods in <see cref="T:AVFoundation.AVAudioSession" /> are
asynchronous and properties may take some time to reflect
their final status. Application developers should be familiar
with asynchronous programming techniques.
</para>
<para>
The <see cref="T:AVFoundation.AVAudioSession" />,
      like the <see cref="T:AVFoundation.AVCaptureSession" /> and <see cref="T:AVFoundation.AVAssetExportSession" />, is a
coordinating object between some number of <see cref="P:AVFoundation.AVAudioSession.InputDataSources" />
and <see cref="P:AVFoundation.AVAudioSession.OutputDataSources" />.
</para>
<para>
      You can register for the notifications posted by the audio system by using the convenience methods in <see cref="T:AVFoundation.AVAudioSession.Notifications" />.

</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
void Setup ()
{
AVAudioSession.SharedInstance ().Init ();
NSError error;
if (!AVAudioSession.SharedInstance ().SetCategory (AVAudioSessionCategory.Playback, out error)) {
ReportError (error);
return;
}
AVAudioSession.Notifications.ObserveInterruption (ToneInterruptionListener);

if (!AVAudioSession.SharedInstance ().SetActive (true, out error)) {
ReportError (error);
return;
}

void ToneInterruptionListener (object sender, AVAudioSessionInterruptionEventArgs interruptArgs)
{
//
}
}
]]></code>
</example>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVAudioSession_ClassReference/index.html">Apple documentation for <c>AVAudioSession</c></related>
</Docs>
</Documentation>
11 changes: 11 additions & 0 deletions docs/api/AVFoundation/AVCaptureConnection.xml
@@ -0,0 +1,11 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVCaptureConnection">
<summary>The link between capture input and capture output objects during a capture session.</summary>
<remarks>
<para>A <see cref="T:AVFoundation.AVCaptureConnection" /> encapsulates the link between an <see cref="T:AVFoundation.AVCaptureInput" /> (more specifically, an individual <see cref="T:AVFoundation.AVCaptureInputPort" /> in the <see cref="P:AVFoundation.AVCaptureInput.Ports" /> property of the <see cref="T:AVFoundation.AVCaptureInput" />) and an <see cref="T:AVFoundation.AVCaptureOutput" />.</para>
<para>
<see cref="T:AVFoundation.AVCaptureConnection" />s are formed automatically when inputs and outputs are added via <see cref="M:AVFoundation.AVCaptureSession.AddInput(AVFoundation.AVCaptureInput)" /> and <see cref="M:AVFoundation.AVCaptureSession.AddOutput(AVFoundation.AVCaptureOutput)" />.</para>
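<para>Once inputs and outputs have been added, the connections that were formed can be inspected. The following is a minimal sketch; it assumes <c>output</c> is an <see cref="T:AVFoundation.AVCaptureOutput" /> that has already been added to a configured session:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
// Minimal sketch: inspect the connections that were formed automatically for an output.
// Assumes `output` is an AVCaptureOutput added to a configured AVCaptureSession.
foreach (var connection in output.Connections) {
    Console.WriteLine ("Active: " + connection.Active + ", Enabled: " + connection.Enabled);
}
]]></code>
</example>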
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureConnection_Class/index.html">Apple documentation for <c>AVCaptureConnection</c></related>
</Docs>
</Documentation>
185 changes: 185 additions & 0 deletions docs/api/AVFoundation/AVCaptureSession.xml
@@ -0,0 +1,185 @@
<Documentation>
<Docs DocId="T:AVFoundation.AVCaptureSession">
<summary>Coordinates a recording session.</summary>
<remarks>
<para>
      The AVCaptureSession object coordinates the recording of video
      or audio input and the passing of the recorded information to one or
      more output objects. As the iOS product line has advanced, devices have gained multiple capture devices (in particular, multiple cameras). Application developers can retrieve these with <see cref="M:AVFoundation.AVCaptureDevice.DefaultDeviceWithMediaType(System.String)" /> or <see cref="M:AVFoundation.AVCaptureDevice.DevicesWithMediaType(System.String)" />, passing in the constants defined in <see cref="T:AVFoundation.AVMediaType" />.
</para>
<para>
Configuring capture consists of setting the <see cref="P:AVFoundation.AVCaptureSession.Inputs" /> and <see cref="P:AVFoundation.AVCaptureSession.Outputs" /> properties of the <see cref="T:AVFoundation.AVCaptureSession" />. Notice that multiple <see cref="T:AVFoundation.AVCaptureInput" />s and <see cref="T:AVFoundation.AVCaptureOutput" />s are possible. For instance, to capture both audio and video, one would use two capture inputs:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var session = new AVCaptureSession();

var camera = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Video);
var mic = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Audio);
if(camera == null || mic == null){
throw new Exception("Can't find devices");
}

var cameraInput = AVCaptureDeviceInput.FromDevice (camera);
//info.plist _must_ contain NSMicrophoneUsageDescription key
var micInput = AVCaptureDeviceInput.FromDevice (mic);

if(session.CanAddInput(cameraInput)){
session.AddInput(cameraInput);
}
if(session.CanAddInput(micInput)){
session.AddInput(micInput);
}
]]></code>
</example>
<para>Note that permission to access the microphone (and in some regions, the camera) must be given by the user, requiring the developer to add the <c>NSMicrophoneUsageDescription</c> key to the application's Info.plist file.</para>
<para>Video can be captured directly to file with <see cref="T:AVFoundation.AVCaptureMovieFileOutput" />. However, this class has no displayable data and cannot be used simultaneously with <see cref="T:AVFoundation.AVCaptureVideoDataOutput" />. Instead, application developers can use it in combination with an <see cref="T:AVFoundation.AVCaptureVideoPreviewLayer" />, as shown in the following example:</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
var layer = new AVCaptureVideoPreviewLayer (session);
layer.VideoGravity = AVLayerVideoGravity.ResizeAspectFill;

var cameraView = new UIView ();
cameraView.Layer.AddSublayer (layer);

var filePath = Path.Combine (Path.GetTempPath (), "temporary.mov");
var fileUrl = NSUrl.FromFilename (filePath);

var movieFileOutput = new AVCaptureMovieFileOutput ();
var recordingDelegate = new MyRecordingDelegate ();
session.AddOutput (movieFileOutput);

movieFileOutput.StartRecordingToOutputFile (fileUrl, recordingDelegate);
]]></code>
</example>
<para>Application developers should note that the function <see cref="M:AVFoundation.AVCaptureFileOutput.StopRecording" /> is asynchronous; developers should wait until the <see cref="M:AVFoundation.AVCaptureFileOutputRecordingDelegate.FinishedRecording(AVFoundation.AVCaptureFileOutput,Foundation.NSUrl,Foundation.NSObject[],Foundation.NSError)" /> delegate method has been called before manipulating the file (for instance, before saving it to the Photos album with <see cref="M:UIKit.UIVideo.SaveToPhotosAlbum(System.String,UIKit.UIVideo.SaveStatus)" /> or <see cref="M:AssetsLibrary.ALAssetsLibrary.WriteVideoToSavedPhotosAlbumAsync(Foundation.NSUrl)" />).</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[
public class MyRecordingDelegate : AVCaptureFileOutputRecordingDelegate
{
public override void FinishedRecording (AVCaptureFileOutput captureOutput, NSUrl outputFileUrl, NSObject [] connections, NSError error)
{
if (UIVideo.IsCompatibleWithSavedPhotosAlbum (outputFileUrl.Path))
{
var library = new ALAssetsLibrary ();
library.WriteVideoToSavedPhotosAlbum (outputFileUrl, (path, e2) =>
{
if (e2 != null)
{
new UIAlertView ("Error", e2.ToString (), null, "OK", null).Show ();
}
else
{
new UIAlertView ("Saved", "Saved to Photos", null, "OK", null).Show ();
File.Delete (outputFileUrl.Path);
}
});
}
else
{
new UIAlertView ("Incompatible", "Incompatible", null, "OK", null).Show ();
}

}
} ]]></code>
</example>
<para>
      Application developers can configure one or more output ports for the
      captured data: still frames, video frames with timing information,
      audio samples, or QuickTime movie files; the data can also be rendered directly to a CoreAnimation layer.

</para>
<para>
      Once the input and output components of
      the session are set, processing is started by calling the
      <see cref="M:AVFoundation.AVCaptureSession.StartRunning" />
      method.

</para>
<example>
<code lang="csharp lang-csharp"><![CDATA[

void SetupCapture ()
{
// session, queue and outputRecorder are fields on the containing class
// configure the capture session for low resolution, change this if your code
// can cope with more data or volume
session = new AVCaptureSession () {
SessionPreset = AVCaptureSession.PresetMedium
};

// create a device input and attach it to the session
var captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video);
var input = AVCaptureDeviceInput.FromDevice (captureDevice);
if (input == null){
Console.WriteLine ("No video input device");
return;
}
session.AddInput (input);

// create a VideoDataOutput and add it to the sesion
var output = new AVCaptureVideoDataOutput () {
VideoSettings = new AVVideoSettings (CVPixelFormatType.CV32BGRA),

// If you want to cap the frame rate at a given speed, in this sample: 15 frames per second
MinFrameDuration = new CMTime (1, 15)
};

// configure the output
queue = new CoreFoundation.DispatchQueue ("myQueue");
outputRecorder = new OutputRecorder ();
output.SetSampleBufferDelegateAndQueue (outputRecorder, queue);
session.AddOutput (output);

session.StartRunning ();
}

public class OutputRecorder : AVCaptureVideoDataOutputSampleBufferDelegate {
public override void DidOutputSampleBuffer (AVCaptureOutput captureOutput, CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
{
try {
var image = ImageFromSampleBuffer (sampleBuffer);

// Do something with the image, we just stuff it in our main view.
AppDelegate.ImageView.BeginInvokeOnMainThread (delegate {
AppDelegate.ImageView.Image = image;
});

//
// Disposing the sample buffer in this callback is essential: AVFoundation
// has a fixed number of buffers, and if it runs out of free buffers it will
// stop delivering frames.
//
sampleBuffer.Dispose ();
} catch (Exception e){
Console.WriteLine (e);
}
}

UIImage ImageFromSampleBuffer (CMSampleBuffer sampleBuffer)
{
// Get the CoreVideo image
using (var pixelBuffer = sampleBuffer.GetImageBuffer () as CVPixelBuffer){
// Lock the base address
pixelBuffer.Lock (0);
// Get the number of bytes per row for the pixel buffer
var baseAddress = pixelBuffer.BaseAddress;
int bytesPerRow = pixelBuffer.BytesPerRow;
int width = pixelBuffer.Width;
int height = pixelBuffer.Height;
var flags = CGBitmapFlags.PremultipliedFirst | CGBitmapFlags.ByteOrder32Little;
// Create a CGImage on the RGB colorspace from the configured parameter above
using (var cs = CGColorSpace.CreateDeviceRGB ())
using (var context = new CGBitmapContext (baseAddress,width, height, 8, bytesPerRow, cs, (CGImageAlphaInfo) flags))
using (var cgImage = context.ToImage ()){
pixelBuffer.Unlock (0);
return UIImage.FromImage (cgImage);
}
}
}
}

]]></code>
</example>
</remarks>
<related type="externalDocumentation" href="https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureSession_Class/index.html">Apple documentation for <c>AVCaptureSession</c></related>
</Docs>
</Documentation>